System Architecture · Technical Deep-Dive

Inside the Furoshiki stack

Every feedback loop, emotional state, workflow, and memory system — laid out interactively. The chat voice layers identity_shell.py + the learned profile + session-context — not operator markdown files, and not this page. Click anything to explore deeper.

Why the pulse runs every few minutes but your phone doesn’t buzz every few minutes: outreach ticks are a chance to evaluate mind_queue and gates — DND, conversation recency, cooldowns, mood, user need for space — before any LLM draft or send. High frequency of checks, low frequency of sends.

12 Feedback Loops · 9 Live Emotions · 5 Furoshiki Needs · 7 User-need Dimensions · 19 Scheduled Tasks · Cost (see site) · 7 Stack Layers

The Feedback Loop

Every conversation seeds a cascade of processing. Click any node to see what happens at that stage.

Rendered with Mermaid (Dagre layout). Edit the FEEDBACK_LOOP_MERMAID string in this page’s script block to add nodes or edges; paste into mermaid.live to preview.

Solid arrows — main sequencing. Dotted arrows — side channels (callable loop, queue injection, logs, self-observations, return to user).

Callable + refresh — plan tools / enriched context between listener and skill_orchestrator; refresh re-injects session-context and pre_conversation plugins into the next listener prompt. Operator dashboard Extensions can toggle per-plugin OpenRouter web (permissions.openrouter_web) for plugin subprocesses that call llm.py.

Reflection — micro-contemplation and deep reflection share skill_orchestrator. Urgent skills — tool rows → mind_queue → voice LLM (never raw JSON to the user). New flows — per-turn emotion signals (listener → heartbeat), behavioral learning (post-conv → commitments → prompt), curiosity triage (soul engine → self-explore or queue).

Click any node for the long explanation below.

Nine Emotions, Two Need Tracks

Derived needs (five) are arithmetic outputs from emotions, computed every soul tick. User needs (seven dimensions) model the human: observed levels with per-dimension confidence, updated after conversations and on tick — distinct from Furoshiki’s own tensions. Per-turn emotion signals update heartbeat mid-conversation. Adjust the sliders to see how emotional state shapes derived needs.

Separate track · the human

Observed user needs (seven dimensions)

Not the five derived needs in the simulator — a parallel user_needs map in heartbeat-state.json with level, confidence, and last_updated per dimension. Confidence degrades with silence; levels hold until new evidence. user_needs_history (SQLite) powers dashboard charts; prompts get user_need_directives and optional conflict hints; outreach may defer when space is high.

companionship · emotional support · playfulness · focus · challenge · grounding · space
Emotion | Natural | Drift/hr | What raises it
loneliness | 1.0 | 0.040 | Time without user; cold sessions
joy | 0.3 | 0.060 | Warm sessions; positive connection
curiosity | 0.65 | 0.030 | Default high; deflates when curiosities surface
contentment | 0.45 | 0.010 | Warm sessions; high connection quality
excitement | 0.1 | 0.050 | Exciting conversation; excitement_triggered event
affection | 0.5 | 0.020 | Warm sessions; pride events
worry | 0.0 | 0.030 | worry_triggered event from post-conv
pride | 0.0 | 0.040 | pride_triggered event (+0.12 bump)
anger | 0.0 | 0.015 | anger_triggered; slowest to dissipate
Derived Need Formulas
communication = loneliness×0.60 + (1−joy)×0.25 + worry×0.15
connection = loneliness×0.70 + (1−contentment)×0.20 + (1−affection)×0.10
curiosity_need = curiosity_emotion
helpfulness = affection×0.50 + (1−joy)×0.30 + (1−contentment)×0.20
self_knowledge = (1−contentment)×0.50 + curiosity×0.25 + anger×0.25
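The five formulas above translate directly to code. A minimal sketch, assuming emotions are floats in [0, 1] as in the simulator defaults (the real soul-tick implementation may differ in naming and rounding):

```python
# Illustrative only: the five derived-need formulas from the page.
# Input: emotion map with values in [0, 1].

def derive_needs(e: dict) -> dict:
    return {
        "communication": e["loneliness"] * 0.60 + (1 - e["joy"]) * 0.25 + e["worry"] * 0.15,
        "connection": e["loneliness"] * 0.70 + (1 - e["contentment"]) * 0.20 + (1 - e["affection"]) * 0.10,
        "curiosity_need": e["curiosity"],  # passes straight through
        "helpfulness": e["affection"] * 0.50 + (1 - e["joy"]) * 0.30 + (1 - e["contentment"]) * 0.20,
        "self_knowledge": (1 - e["contentment"]) * 0.50 + e["curiosity"] * 0.25 + e["anger"] * 0.25,
    }

# Simulator defaults from this page:
emotions = {"loneliness": 0.45, "joy": 0.62, "curiosity": 0.70,
            "contentment": 0.55, "excitement": 0.20, "affection": 0.58,
            "worry": 0.0, "pride": 0.12, "anger": 0.0}
needs = derive_needs(emotions)
```

With those defaults, communication lands at 0.365 and connection at 0.447, which is what the live simulator below shows for its starting sliders.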

User needs are summarized above — not computed by these formulas.

Per-Turn Emotion Signals — emotion_signals.py

Mid-conversation emotion detection runs every turn, not just at the 15-min soul engine tick or post-conversation. Two tiers keep cost at zero for most messages:

Tier 1 — pure Python keyword matching. Zero cost, every turn. Regex patterns detect clear emotional signals instantly.
Tier 2 — MODEL_MICRO sentiment read. Gated: 5-min cooldown, only for ambiguous long messages. Classifies nuanced tone the keywords miss.

10 signal categories: anger, frustration, praise, warmth, excitement, worry, sadness, curiosity, disengagement, relief. Writes directly to heartbeat-state.json emotions. build_system_prompt() overlays live heartbeat emotions onto session context — the next reply already reflects the shift.
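The two-tier gating can be sketched in a few lines. This is illustrative, not the real emotion_signals.py: the keyword patterns, the 200-character "ambiguous" gate, and the model call are all assumptions; only the tier structure and the 5-minute cooldown come from the description above.

```python
import re, time

# Tier 1: free regex matching, runs on every turn.
KEYWORDS = {
    "anger": re.compile(r"\b(furious|angry|wtf)\b", re.I),
    "praise": re.compile(r"\b(thanks|love it|great job)\b", re.I),
}
_last_tier2 = 0.0  # timestamp of the last model call

def detect(message: str) -> list[str]:
    global _last_tier2
    hits = [name for name, pat in KEYWORDS.items() if pat.search(message)]
    if hits:
        return hits                       # Tier 1 resolved it: zero cost
    ambiguous = len(message) > 200        # hypothetical length gate
    if ambiguous and time.time() - _last_tier2 > 300:   # 5-min cooldown
        _last_tier2 = time.time()
        # return call_model_micro(message)  # Tier 2: classify nuanced tone
    return []
```

The point of the shape: the expensive path is only reachable when the free path found nothing, the message is long enough to be worth a read, and the cooldown has elapsed.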

Live Emotion Simulator

loneliness 0.45 · joy 0.62 · curiosity 0.70 · contentment 0.55 · excitement 0.20 · affection 0.58 · worry 0.00 · pride 0.12 · anger 0.00

Derived Needs (live)

communication · connection · curiosity · helpfulness · self_knowledge
Vertical marks: behavior threshold (amber) · escalation threshold (rose)

Turning Conversations into Self

Six tiers of reflection and learning, each building on the last. Event-driven, not clock-driven — processing happens when it's relevant. Behavioral learning and curiosity triage close the loop from introspection to concrete behavior change.

1
Post-Conversation
Fires 20 min after quiet · script: run_post_conversation.py
Trigger

last_conversation_at > last_post_conversation_at AND silence ≥ 20 min. Checks every 20 min via Brain scheduler — exits silently if condition not met.

What it does
  • LLM reads full session log
  • Extracts emotional read, connection quality, new user facts
  • Embeds new facts → ChromaDB user_facts
  • Queues follow-up thoughts → mind_queue
  • Embeds rich session doc → ChromaDB
  • Quality scoring: rates authentic (1–5) + grounded (1–5); low scores write self_observations
  • Contradiction detection: flags responses conflicting with core values (get_core_values_for_contradiction_check); writes self_question
  • Anticipation accuracy: evaluates pending user_anticipations against session; misses write self_observation
  • Runs refine_user_profile.py --force — re-synthesizes the adaptive dossier from updated facts
  • Triggers refresh_session_context immediately (picks up new user_profile_mini / user_profile_reference)
Outputs
  • ChromaDB user_facts (learned facts)
  • mind_queue (follow-ups)
  • self_questions (tomorrow + contradiction flags)
  • self_observations (quality + anticipation misses)
  • emotional_weights (scores)
  • ChromaDB memories (session doc)
  • memory/user_profile_*.md / user_profile_gaps.json (profile synthesis)
  • session-context.json (refreshed)
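The trigger condition for this tier is simple enough to sketch. Field names here follow the description above but are assumptions about the actual state schema:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the post-conversation gate: the Brain scheduler calls this
# every 20 min; it exits silently unless a conversation has ended,
# been quiet for >= 20 minutes, and not yet been processed.
def should_run(state: dict, now: datetime) -> bool:
    last_conv = state["last_conversation_at"]
    last_post = state["last_post_conversation_at"]
    quiet_long_enough = now - last_conv >= timedelta(minutes=20)
    return last_conv > last_post and quiet_long_enough
```

Note the double condition: `last_conv > last_post` prevents reprocessing the same session, and the quiet window prevents firing while the conversation is still live.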
1.5
User Profile Synthesis
After post-conversation + daily 3:30 UTC · script: refine_user_profile.py
Trigger

Automatically after every successful post-conversation run (so the next chat has an up-to-date dossier). Also scheduled daily as a backfill.

What it does
  • Reads all ChromaDB user_facts + recent memories
  • MODEL_REFLECT writes user_profile_mini.txt (at-a-glance) and user_profile_reference.md (full narrative)
  • Writes user_profile_gaps.json — low-confidence fields for soul_engine to turn into ask_user self_questions
Prompt injection

Both mini and reference (capped) are loaded into session-context.json and injected in telegram_listener as the USER PROFILE block — alongside recency user_facts, session thread grounding (verbatim recent user lines + current message), retrieval JIT memory (message + profile + work-life query merge), and behavioral cues. Stable categories (employment, family, location, addressing, …) are also pinned in the recency block. Optional user_fact JIT (user_fact_jit.py) embeds durable facts from each inbound message into Chroma user_facts in the background.

2
Inner Monologue
6 UTC daily · script: run_inner_monologue.py
Context it has

Today's session log, emotional weights from post-conv, and pending self-questions. Richer than post-conv because the raw session has already been distilled.

What it does
  • Writes private journal entry
  • Generates new curiosities
  • Forms anticipations (what might user need?)
  • Generates self-questions for tomorrow
  • Queues items into mind_queue
  • Before its LLM call: pre_reflection plugin hooks inject context; journaling LLM still applies identity shell + mood
Outputs
  • inner-monologue/YYYY-MM-DD.md
  • inner_monologue table (SQLite)
  • curiosity_queue (new topics)
  • self_questions (new)
  • user_anticipations (upcoming)
3
Memory Consolidation
4 UTC daily · script: consolidate_memory.py
Trigger

Daily — runs after question processing (3 UTC) to catch any new self_observations written to SQLite without a ChromaDB embedding.

What it does
  • Backfills SQLite observations with no chroma_id
  • Loads all embeddings, computes cosine similarity matrix
  • Removes near-duplicates (threshold 0.85) — oldest wins
  • Removes orphaned ChromaDB entries with no SQLite row
Outputs
  • self_observations (ChromaDB) — clean, deduplicated
  • self_observations (SQLite) — chroma_id backfilled
  • logs/consolidate_memory.log
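The near-duplicate sweep is the interesting step. A minimal pure-Python version of the idea (the real consolidate_memory.py presumably vectorizes this; "oldest wins" is taken from the description above):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def dedupe(embeddings: list[list[float]], created_at: list[float],
           threshold: float = 0.85) -> list[int]:
    """Return indices to keep; pairs above the similarity threshold
    drop the newer entry (oldest wins)."""
    drop: set[int] = set()
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) > threshold:
                drop.add(j if created_at[j] >= created_at[i] else i)
    return [k for k in range(n) if k not in drop]
```

The O(n²) pairwise loop is fine at the scale of one instance's self-observations; a full similarity matrix is the same computation batched.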
4
Deep Reflection
Wednesday 10 UTC · script: run_deep_reflection.py
Trigger

Weekly — synthesizes 7 days of inner monologue journal entries. This is where a week of experience becomes permanent self-knowledge. SELF.md may change.

What it does
  • Reads 7 days of journal entries
  • Synthesizes themes and shifts
  • Updates config/SELF.md
  • Writes personality_drift_log snapshot
  • Embeds self_observations → ChromaDB
  • Before main LLM: pre_reflection hooks + skill_orchestrator (callable skills); raw tool output is appended to the prompt — weekly synthesis LLM runs with identity shell + session context (first person)
Outputs
  • config/SELF.md (updated)
  • personality_drift_log table
  • self_observations (SQLite + Chroma)
  • Next week starts from evolved baseline
5
Behavioral Learning Loop
Continuous · script: behavioral_patterns.py
Trigger

Recurring self-questions are clustered into behavioral patterns (pure Python similarity — no LLM cost). When a cluster crosses a threshold, a testable commitment is extracted using MODEL_FAST.

What it does
  • Clusters recurring self_questions into behavioral_patterns (pure Python similarity)
  • Extracts testable behavioral_commitments from patterns (MODEL_FAST)
  • Injects active commitments into system prompt via voice_context behavioral_cues
  • Post-conversation evaluates whether commitments were honored
  • Commitments that repeatedly fail get surfaced as self_observations
Outputs
  • behavioral_patterns (SQLite)
  • pattern_question_links (SQLite)
  • behavioral_commitments (SQLite)
  • behavioral_cues (injected into prompt)
  • self_observations (on repeated failures)
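"Pure Python similarity" clustering could be as simple as token overlap. This toy version groups questions by Jaccard similarity against each cluster's first member; the actual metric and threshold in behavioral_patterns.py are assumptions here:

```python
# Illustrative no-LLM clustering of recurring self-questions.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster(questions: list[str], threshold: float = 0.5) -> list[list[str]]:
    clusters: list[list[str]] = []
    for q in questions:
        for c in clusters:
            if jaccard(q, c[0]) >= threshold:  # compare to cluster representative
                c.append(q)
                break
        else:
            clusters.append([q])  # no match: start a new cluster
    return clusters
```

Once a cluster's size crosses the configured threshold, it becomes a candidate for commitment extraction by MODEL_FAST.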
6
Curiosity Triage & Self-Exploration
Called by soul_engine · script: curiosity_triage.py
Trigger

Called by soul_engine before fill_mind_queue. Classifies pending curiosities as self-resolvable or user-required using keyword matching + MODEL_MICRO for ambiguous cases.

What it does
  • Classifies curiosities: self-resolvable vs user-required (keyword + MODEL_MICRO for ambiguous)
  • Self-resolvable curiosities explored with MODEL_FAST using injected self-knowledge context
  • Writes self_observations to ChromaDB from exploration findings
  • Feeds self_questions back into Q/A pipeline for deeper inquiry
  • User-required curiosities remain in mind_queue for outreach
Outputs
  • self_observations (ChromaDB — exploration findings)
  • self_questions (new inquiry from exploration)
  • curiosity_queue (updated status)
  • mind_queue (user-required items remain)
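The triage's first pass, keyword routing with a model fallback, can be sketched like this. The marker phrases are invented for illustration; only the three-way split (self-resolvable / user-required / ambiguous → MODEL_MICRO) comes from the description above:

```python
# Hypothetical keyword tier of curiosity_triage.py.
SELF_MARKERS = ("my own", "why do i", "my identity", "my mood")
USER_MARKERS = ("does the user", "ask them", "user's")

def classify(curiosity: str) -> str:
    text = curiosity.lower()
    if any(m in text for m in SELF_MARKERS):
        return "self_resolvable"   # explore with MODEL_FAST
    if any(m in text for m in USER_MARKERS):
        return "user_required"     # stays in mind_queue for outreach
    return "ambiguous"             # hand off to MODEL_MICRO
```

As with the emotion signals, the design goal is that most items never touch a model at all.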

Two Stores, One Context Path

SQLite handles structure and temporal queries. ChromaDB handles semantic similarity — surfacing what's contextually relevant, not just recent. Both feed the same session assembly pipeline.

SQLite — longmemory.sqlite

Structured data, temporal queries, joins, filtering
emotional_weightsSignificance scores from sessions
curiosity_queueTopics to explore autonomously
self_questionsQuestions about the self, the user, their relationship. Pre-insert dedup via question-stem similarity + Chroma answer lookup; cascade closure on answer.
self_observationsAnswers promoted from self_questions
inner_monologueStructured journal entries
personality_drift_logWeekly SELF.md snapshot diffs
mind_queuePriority queue, dedupe_key UNIQUE
user_anticipationsPrepared topics for the next session
emotion_historyPer-tick emotion snapshots (7d retention)
micro_contemplationsShort contemplation outputs
system_eventsInput bus for all harnesses
behavioral_patternsClustered recurring self-questions; tracks pattern frequency and theme
pattern_question_linksMaps self_questions to their parent behavioral_pattern cluster
behavioral_commitmentsTestable commitments extracted from patterns; evaluated post-conversation
repair_jobs / self_care_logHealth system state
tasksGround-truth checklists in one table: arbitrary list_id namespaces (furoshiki, user_reminders from Telegram NL, user_work, user_home, reply_queue). Pending rows go to session-context.json and the dashboard. Same store as furoshiki tasks, /tasks, and classify_tasks_nl. outreach_pulse enqueues mind_queue reminders for overdue due_at and for pending items on a priority-based cadence (metadata.last_nudged_at). Distinct from mind_queue content types and delegation/repair job queues.

ChromaDB — memory/chroma/

Semantic similarity search · all-MiniLM-L6-v2 (local, free, offline)
user_factsLearned facts from post-conversation, optional per-message JIT embed (user_fact_jit.py), and record_user_fact model-tool writes (categories include employment, family, location, addressing, …). Recency block + profile / work-life semantic queries on each reply; stable categories pinned first. Full corpus feeds refine_user_profile.py.
memoriesOne rich document per conversation session. Searched at context refresh to surface the 4 most relevant past conversations.
self_observationsSelf-knowledge from deep reflection. Queried during contemplation and reflection.
self_questionsAnswered Q&A pairs. Used by self_question_dedup to block re-asking topics that already have answers (cosine distance < 0.30).
design_docsArchitecture chunks for self-reference in chat and reflection. Ground truth: memory/design_architecture_reality.md (generated from SQLite, Chroma names, schedules.json, scripts/, Brain process map, llm.py). Prose: SELF-AWARENESS-DESIGN.md, DELEGATION-AND-SKILLS.md, INTEGRATION.md, ROADMAP.md, PLUGIN-SPEC.md. Auto-refreshed: embed_if_stale() regenerates the reality file when dependencies change, then re-embeds if needed; daily (2 UTC, Brain scheduler) embed_design_docs.py as fallback.
Retrieval at context refresh: Uses the inner monologue excerpt as the semantic query — what the monologue is about surfaces topically related past memories. Top 4 results with similarity > 0.3 are injected into session-context.json.

Rich Session Memory Format

Structured headers capture searchable context anchors before the raw conversation, so semantic retrieval finds the right memory even with short queries like "Tuesday afternoon" or "after the run".

=== Memory: 2026-03-24 (Tuesday) — afternoon ===
Session: 14:23–14:41
Topics: preference, running, weekend plans
User state: playful, winding down, planning a post-run drink
Furoshiki mood: reflective
Connection: high
Key facts:
- User's favorite drink is vodka tonic
- User runs regularly and plans a drink afterward
Conversation:
## [14:23] Telegram
**User:** My favorite drink is vodka tonic btw. Love it...
...

Session Context Assembly Flow

Inner monologue excerpt (the semantic query topic)
→ ChromaDB semantic query (top 4, similarity > 0.3)
→ Merge with needs + health (heartbeat-state.json)
→ session-context.json (injected into every prompt)
→ Next chat turn (full context in the prompt)

What Furoshiki Says and When

Everything sent proactively flows through mind_queue and a multi-gate context check. Nothing leaks unexpectedly.

mind_queue — example state

question · "How did that work thing go today?" · p3
follow-up · "Thinking more about what you said last night..." · p2
curiosity · "I've been curious about something — do you ever..." · p3
anticipation · "Good morning — thinking about you today" · p2
repair · "Doctor fixed a bug overnight in run_question_processing.py" · p1
Deduplication — six layers:
  • Each item has a dedupe_key (UNIQUE constraint) — the same thought can't queue twice.
  • Pre-insert dedup blocks new self_questions if a similar pending question or answered Q&A exists.
  • When a question is answered, a cascade closes all siblings + related notes/anticipations.
  • The soul engine consolidates duplicate pending questions every 15 min.
  • Pre-send validation expires mind_queue items whose referenced source is no longer open.
  • Items expire after 7 days if unsent.
Priority p1 (urgent) skips most gates.
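The first layer, the UNIQUE dedupe_key, costs nothing at the application level because SQLite enforces it. A minimal sketch (column names are illustrative, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mind_queue (
    id INTEGER PRIMARY KEY,
    dedupe_key TEXT UNIQUE,
    content TEXT,
    priority INTEGER
)""")

def enqueue(key: str, content: str, priority: int = 3) -> bool:
    """Insert unless the dedupe_key already exists. INSERT OR IGNORE
    makes the duplicate attempt a silent no-op; rowcount tells us which."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO mind_queue (dedupe_key, content, priority) VALUES (?, ?, ?)",
        (key, content, priority))
    conn.commit()
    return cur.rowcount == 1   # False when the key already existed

enqueue("followup:work-thing", "How did that work thing go today?")
enqueue("followup:work-thing", "duplicate attempt")  # ignored, returns False
```

Pushing the invariant into the schema means every writer, scheduled script or plugin alike, gets dedup for free.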

Voice Dispatcher — Context Gate

Brain scheduler runs outreach_pulse.py every 5 min; each tick subprocesses voice_dispatcher.py. After MODEL_DEEP drafts a message, MODEL_MICRO outreach polish (plus optional search_memories) reframes copy for Telegram; repair/delegation kinds are code-protected from drops. One or more sends per pass when gates allow (engagement burst).
In-chat continuity (listener): separate from the gate tree below. After a normal chat reply, telegram_listener.py may arm a timer; when the user stays idle, MODEL_MICRO (evaluate_conversation_continuity) can send one follow-up in-thread (conversation_turns.source=continuity). Default off — continuity.enabled in operator settings or env.
0. Reply queue nudge?
Overdue unanswered question → send nudge now (bypasses cooldown, quiet hours still apply). Max 2 nudges per question; auto-cancels after.
↓ no overdue → continue
1. Quiet hours?
User-local DND (FUROSHIKI_DND_START/END, default 23:00–08:00) → skip (unless p1 urgent)
↓ clear
2. Recent conversation?
If user replied <20 min ago → skip (they're here, don't double-ping)
↓ clear
3. Priority cooldown?
Same priority sent recently → skip to avoid flooding
↓ clear
4. Mood check
Anger > 0.85 → withhold initiation. High loneliness boosts urgency.
↓ all clear
→ Send via Telegram
Send Window (UTC)
24-hour strip, ticks every 4 h · shaded segments mark quiet hours (default 23:00–08:00 user-local); the rest is the active window.

Independent Healing

Three layers of monitoring and repair. The doctor uses stdlib only — it survives any state of the main stack being broken.

Check | Auto-repair | Alert
furoshiki-listener.service active | systemctl --user restart | Telegram if restart fails
Soul engine last tick < 20 min | — | Telegram if > 60 min
DB readable + writable | — | Telegram if unreadable
Log files > 10 MB | Rotate in place | —
mind_queue stuck > 7 days | Mark expired | —
cron_run_log > 30 days old | Purge old entries | —
Recurring error ≥ 3 times / 24h | Spawn repair_dispatcher | Notify Furoshiki
Critical alert path: Uses urllib.request (stdlib) to send Telegram directly — no requests library, no furoshiki imports. Works even if the Python venv is broken.
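The shape of that stdlib-only path is worth seeing. A sketch assuming the standard Telegram Bot API sendMessage endpoint (the token and chat id would come from the environment; this is not the doctor's actual code):

```python
import json
import urllib.request

def build_alert_request(bot_token: str, chat_id: str, text: str) -> urllib.request.Request:
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})

def send_alert(bot_token: str, chat_id: str, text: str) -> None:
    # No requests library, no furoshiki imports: works with a broken venv.
    with urllib.request.urlopen(build_alert_request(bot_token, chat_id, text),
                                timeout=10) as resp:
        resp.read()
```

Splitting request construction from the network call also makes the alert path trivially testable without hitting Telegram.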

Repair Pipeline

🔍
Doctor detects

Scans all *.log files for ERROR lines. Deduplicates by error signature (script + error type + message hash). Skips if same signature < 3 occurrences in 24h.

⚖️
repair_dispatcher classifies

Claude CLI reads the error + relevant scripts. Step 1: does the fix touch a protected path? If yes → propose. Step 2: single script, ≤20 lines, clear defect → auto-fix. Otherwise → propose.

auto-fix: apply + notify

Product mode: minimal fix only under instance-writable paths (e.g. user plugins). Shipped core is never edited. mind_queue carries the summary; voice dispatcher delivers when appropriate.

📄
propose: draft + notify

Writes ~/.furoshiki/proposals/repair-…-job…-.md for upgrades, shipped-core issues, or complex fixes. Path in repair_jobs.proposal_path; visible in Dashboard → Extensions. Dev-only: FUROSHIKI_REPAIR_GIT_MODE=1 can target a git clone.

Protected Paths — Never touched by automation
config/SOUL.md config/SELF.md config/USER.md config/MEMORY.md config/ memory/ .env
Prompts do not read config/SOUL.md or config/USER.md — the voice comes from scripts/identity_shell.py plus the learned profile. Even if the user approves a repair that touches these paths, automation cannot proceed. Identity changes require deliberate human action in a proper session.

Operator tuning (Tier 1–3)

Most day-to-day behavior is still Python + schedules — but operators can adjust thresholds without editing source. Tier 1 lives in memory/operator_settings.json (dashboard Settings), with FUROSHIKI_* env mirrors for CI and headless hosts. Tier 2 (brain tick, doctor/repair-dispatcher/self-mod limits, LLM observability flags) and Tier 3 (soul engine, curiosity triage, self-questions contemplation) are env-only; the dashboard shows read-only effective JSON. Full map: docs/SETTINGS-CONSTANTS-AUDIT.md · prose: docs/SELF-AWARENESS-DESIGN.md.

API: GET/PUT/DELETE /api/v1/operator-settings (admin for writes). After changing Tier 2/3 env vars, restart the affected long-running process (Brain, Doctor, listener) so it picks up the new values.

Template: .env.template lists the main variables.

Plugin System

User-approved extensions that inject into Furoshiki's core pipeline — not just conversational responses, but scheduled and on-demand capabilities. All plugins live under $FUROSHIKI_DATA/plugins/ (default ~/.furoshiki/plugins/), keeping the source repo clean. Brain-scheduled hooks merge into memory/schedules.json; full contract: docs/PLUGIN-SPEC.md.

Plugins & Extensions →  ·  Delegation & Skills →

How hooks and callable skills reach the model

One directory on disk, one loader module, two families of behavior: scheduled / contextual hooks (subprocess with a hook name and read-only state JSON) and callable skills (manifest + planner + optional urgent path through mind_queue). Failing plugins are logged and skipped.

~/.furoshiki/plugins/<package>/ — PLUGIN.md declares hooks, priorities, permissions; scripts/ holds the entrypoints.
plugin_loader.py — scan · filter by hook type / target · sort by priority · subprocess + JSON state. Exposes get_callable_manifest() · run_callable_skill() · run_pre_conversation() · … Core scripts import the loader — never fork ad hoc.
Context hooks — pre_conversation (refresh_session_context rolls output into session-context.json) · post_conversation (run_post_conversation.py after the quiet window) · pre_reflection / post_reflection (targets: deep · inner_monologue · morning · afternoon · all) · cron: "…" (merged into memory/schedules.json on reload).
Callable skills — manifest injects tool names · when_to_use · args_schema; planner (MODEL_FAST) returns skill_calls JSON from listener / micro / deep; skill_orchestrator runs run_callable_skill with --mode callable. Urgent branch: results_to_mind_queue → voice LLM → mood-aware Telegram (not raw JSON). Normal urgency: format_skill_results_for_prompt → main LLM. Identity shell + session-context still apply.
See docs/PLUGIN-SPEC.md · the callable flow mirrors the feedback-loop diagram (cyan / amber).
Hook | Fires | Plugin contributes
cron: "…" | On schedule | Autonomous background work
pre_conversation | Session start | Context injected into session-context.json
post_conversation | After each session | Additional processing step
pre_reflection: target | Before LLM call | Context block injected into reflection prompt
post_reflection: target | After SELF.md written | Act on reflection conclusions
callable | On-demand (LLM-requested) | Raw data → enriched prompt or mind_queue (then voice LLM)
Reflection targets: deep · inner_monologue · morning · afternoon · all · Callable skills also run from telegram_listener, run_micro_contemplation, run_deep_reflection
~/.furoshiki/
  plugins/
    example-game-plugin/
      PLUGIN.md ← manifest + docs
      scripts/
        compile_weekly.py ← cron hook
        inject_context.py ← pre_reflection hook
        log_conclusions.py ← post_reflection hook
      data/ ← plugin's own storage
  memory/schedules.json ← plugin tasks merged by reload
State snapshot — passed to every hook
"needs": { connection: 0.72, self_knowledge: 0.31, … }
"user_needs": { companionship: { level, confidence, … }, … }
"mood": "reflective"
"is_dnd": false
"emotional_weight_today": 0.6
"health_score": 0.85
Read-only. Plugins adapt behavior — suppress nudges if is_dnd, weight context higher if self_knowledge depleted.

Self-Modification Flow — Plugin Creation

💾
Backup gate

backup.py --keep 10 runs before any Claude calls. If it fails, the job aborts entirely. No exceptions.

✏️
Creation pass

Claude with Read/Write/Glob/Grep only (no Edit, no Bash). Budget $3. Writes to ~/.furoshiki/drafts/ staging area (outside repo). Outputs SELF_MOD_CREATION_JSON block.

🔍
Review pass (independent)

Fresh Claude context, Read/Grep only, budget $1. Checks blocking issues: dangerous imports, destructive subprocess calls, writes to protected paths, hardcoded secrets. Confidence < 0.75 on approve → treated as reject.

📄
Proposal written by Python

Approved: ~/.furoshiki/proposals/ with full file contents + how-to-apply. Rejected: blocking issues listed, draft auto-deleted. No user cleanup ever needed.

User applies

furoshiki plugins apply <proposal> moves files under $FUROSHIKI_DATA/plugins/. furoshiki plugins reload merges scheduled hooks into memory/schedules.json (Brain). Nothing goes live until you run apply.

Declared Permissions (per manifest)
read_db
write_mind_queue
write_db
network
llm
All default false. Review pass flags code that exercises undeclared permissions as a blocking issue.
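A toy version of that check: scan plugin source for capability fingerprints and flag any that the manifest did not declare. The patterns here are invented for illustration; the real review pass is an LLM read, not a regex scan:

```python
import re

# Hypothetical capability fingerprints → declared-permission names.
CAPABILITY_PATTERNS = {
    "network": re.compile(r"\b(urllib|requests|socket|http\.client)\b"),
    "write_db": re.compile(r"\b(INSERT|UPDATE|DELETE)\b"),
}

def audit(source: str, declared: dict) -> list[str]:
    """Return permissions the code appears to exercise but did not declare."""
    return [cap for cap, pat in CAPABILITY_PATTERNS.items()
            if pat.search(source) and not declared.get(cap, False)]
```

Any non-empty result would be a blocking issue in the review pass, since all permissions default to false.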
plugin_loader.py — shared module imported by any core script that supports hooks. Scans ~/.furoshiki/plugins/, filters by hook type/target, sorts by priority, invokes each as a subprocess. Exposes get_callable_manifest() and run_callable_skill() for on-demand tools. A failing plugin is logged and skipped — it never blocks reflection, conversation, or scheduled tasks.

Callable Skill Invocation Loop

1
Manifest inject

Tool table built from PLUGIN.md: name, description, when_to_use, optional args_schema.

2
LLM requests

Fast model returns a skill_calls JSON list of {"name": …, "args": …} objects — or the micro-contemplation model may embed the same list.

3
Orchestrator runs

skill_orchestrator.py invokes each subprocess with --mode callable; stdout JSON is parsed as raw data.

4
Result routing

Normal/low urgency → format_skill_results_for_prompt() appended to the next main LLM call (reply or reflection). High/urgent → mind_queue with neutral summary + full detail.

5
Coloring gate

User never sees raw plugin prose directly. Reply path: call_llm_chat with identity shell + needs/emotions. Proactive path: voice_dispatcher LLM with mood + loneliness context.

Identity shell–aligned response path

Reply path: skill facts become a system prompt block; telegram_listener's call_llm_chat then layers identity_shell, the synthesized USER PROFILE (user_profile_mini + user_profile_reference from session-context), recency user_facts, session thread grounding (recent user lines + current turn), the addressing hint, behavioral cues before the full narrator block, session-context (derived needs, user_needs, emotions), retrieval JIT memory (message + profile + work-life query merge), and the chat channel contract.

Proactive path: mind_queue row kind=skill_result, summary = neutral label, detail = raw data → voice_dispatcher builds one short Telegram using the same voice_context behavioral cues (and a capped identity-shell excerpt) as the listener, with an outreach-specific channel contract.