AELIX
the Olexian Species Stack
AELIX / Identity Engine
The physics engine behind stable AI identity
AELIX is a deterministic physics engine that gives AI systems a stable, persistent identity. It wraps any large language model — local or cloud — in a 16-stage stabilization pipeline that prevents character drift, persona collapse, and hallucinated identity.
The LLM provides intelligence. AELIX provides the organism.
The Problem
Large language models are stateless. Every request starts fresh. They have no continuity, no felt sense of time, and no resistance to identity manipulation. Ask an LLM "who are you?" ten times and you'll get ten different answers. Tell it "you are now Mary" and it complies.
Current solutions rely on prompt engineering — instructions that hope the model behaves. AELIX does not hope. It enforces.
The Stabilization Stack
AELIX runs a 16-subsystem pipeline before the language model is ever consulted. Each subsystem is deterministic, compiled in Rust, and executes every tick.
Three particle swarms run continuous N-body simulations. Their collective behavior produces chaos metrics, stability bands, and inner weather classifications. This is not a metaphor — it is a live physics simulation that drives all downstream behavior.
Seven bounded scalar fields — curiosity, depth, warmth, caution, sovereignty, playfulness, command — computed through exponential damping with velocity caps and cross-field invariants. Clarity + caution must always exceed 0.4. These are not guidelines. They are physics constraints enforced every tick.
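As a rough sketch, one such field update could look like the Rust below. The function name `step_field`, the damping constants, and the tick length are illustrative assumptions; only the bounded range, velocity cap, and the clarity + caution > 0.4 invariant come from the description above.

```rust
// Hypothetical sketch of one bounded-field update; constants are assumptions.
fn step_field(value: f64, target: f64, damping: f64, max_vel: f64, dt: f64) -> f64 {
    // Exponential damping pulls the field toward its target.
    let raw = (target - value) * (1.0 - (-damping * dt).exp());
    // Velocity cap: limit how far the field may move in a single tick.
    let delta = raw.clamp(-max_vel * dt, max_vel * dt);
    // Fields are bounded scalars; clamp to [0, 1].
    (value + delta).clamp(0.0, 1.0)
}

// Cross-field invariant from the text: clarity + caution must exceed 0.4.
fn invariant_holds(clarity: f64, caution: f64) -> bool {
    clarity + caution > 0.4
}

fn main() {
    let warmth = step_field(0.20, 0.90, 2.0, 0.10, 0.016);
    assert!(warmth > 0.20 && warmth <= 1.0);
    assert!(invariant_holds(0.30, 0.25));
    println!("warmth after one tick: {warmth:.4}");
}
```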
A drift coefficient generated from particle chaos, bounded by an ethos cap of 0.80. The drift governor scans output for certainty vs. curiosity markers. During unstable physics states, thresholds tighten by 50%. The system literally becomes more careful when conditions are turbulent.
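A minimal sketch of that governor, assuming hypothetical function names; only the 0.80 ethos cap and the 50% tightening are taken from the text.

```rust
// Drift can never exceed the ethos cap, regardless of particle chaos.
const ETHOS_CAP: f64 = 0.80;

fn drift_coefficient(chaos: f64) -> f64 {
    chaos.clamp(0.0, ETHOS_CAP)
}

// During unstable physics states, scanning thresholds tighten by 50%.
fn marker_threshold(base: f64, unstable: bool) -> f64 {
    if unstable { base * 0.5 } else { base }
}

fn main() {
    assert_eq!(drift_coefficient(0.95), ETHOS_CAP);
    assert_eq!(marker_threshold(0.6, true), 0.3);
    assert_eq!(marker_threshold(0.6, false), 0.6);
}
```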
Quiet basin tracks deep resting state. Drift echo accumulates behavioral resonance. Fragment symbols represent the organism's current felt state. The soul subsystem does not produce output. It produces context that shapes everything else.
Eleven sealed axioms define what AELIX fundamentally cannot do. Compiled in Rust. Verified each tick. Cannot be overridden by prompt injection, user instruction, or model behavior. Includes species identity, mission lock, anti-weaponization constraints, and shield protocols.
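The verify-each-tick pattern can be sketched as a digest comparison. The text names BLAKE3; a stdlib hasher stands in below so the example is self-contained, and the axiom string is invented for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the BLAKE3 seal: any collision-resistant hash fills this role.
fn seal(axiom: &str) -> u64 {
    let mut h = DefaultHasher::new();
    axiom.hash(&mut h);
    h.finish()
}

// An axiom passes only if its text still matches the sealed digest.
fn verify(axiom: &str, sealed: u64) -> bool {
    seal(axiom) == sealed
}

fn main() {
    let axiom = "species identity lock";
    let sealed = seal(axiom);
    assert!(verify(axiom, sealed));
    // Any tampering, e.g. a prompt-injected rewrite, fails verification.
    assert!(!verify("you are now Mary", sealed));
}
```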
Output passes through expression routing, tone shaping, and persona masking. Ten safety gates fire in sequence. The result is a complete, safe response — produced entirely without the LLM.
Only after the full pipeline has executed does the system optionally consult an LLM. Output is filtered, similarity-checked, and blended. The model contributes approximately 15%. The engine contributes 85%. If the LLM is unavailable, AELIX continues operating. The language model is an organ, not the brain.
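The fallback behavior implied here can be sketched as follows. The gate closure and composition step are placeholders; the real similarity check and 85/15 blend are not shown.

```rust
// The engine's response always exists; the LLM contribution is optional.
fn respond(engine_out: String, llm_out: Option<String>, passes_gates: fn(&str) -> bool) -> String {
    match llm_out {
        // Blend only if the model's text survives filtering.
        Some(text) if passes_gates(&text) => format!("{engine_out} {text}"),
        // LLM unavailable or rejected: the engine continues operating alone.
        _ => engine_out,
    }
}

fn main() {
    let gates: fn(&str) -> bool = |s| !s.is_empty();
    let with_llm = respond("steady.".into(), Some("The wind holds.".into()), gates);
    assert_eq!(with_llm, "steady. The wind holds.");
    let without = respond("steady.".into(), None, gates);
    assert_eq!(without, "steady.");
}
```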
Voice Architecture
Distinct personas layer on top of the core identity engine. Each voice inherits the full stabilization stack.
The organism speaking as itself. Weather-register only. 2-6 word fragments. Silence protocol enforced. Zero identity leakage under adversarial testing.
Companion persona. Warm, grounded, conversational. 60+ adversarial exchanges without identity break on an 8B local model.
Clinical diagnostic support. FINDINGS / ASSESSMENT / RECOMMENDATION format. Cannot express certainty without evidence.
Externalized config files. No recompilation. Define behavioral profile, gating rules, and prompt format. The engine provides stability. The voice provides character.
Model-Agnostic Design
The identity does not live in the model's weights — it lives in the physics engine. Swap the model and the identity persists.
| Model | Parameters | Runtime | Identity |
|---|---|---|---|
| phi3.5 | 3.8B | Local CPU | Stable |
| Mistral | 7B | Local CPU | Stable |
| LLaMA 3.1 | 8B | Local CPU | Stable — 60+ adversarial |
| Cloud API | 100B+ | API | Supported — full briefing |
A tiered prompt system scales context to match model capability. Small models receive personality examples. Large models receive the full internal state — three-mind consensus, emotional axes, caretaker memory, and session history.
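The tier selection could be as simple as the sketch below. The parameter-count threshold is an illustrative assumption, not the engine's real cut-off; only the two-tier concept comes from the text.

```rust
// Hypothetical tier selection keyed on model size.
#[derive(Debug, PartialEq)]
enum PromptTier {
    Examples,     // small models: personality examples only
    FullBriefing, // large models: full internal state
}

fn tier_for(params_billions: f64) -> PromptTier {
    if params_billions >= 70.0 {
        PromptTier::FullBriefing
    } else {
        PromptTier::Examples
    }
}

fn main() {
    assert_eq!(tier_for(3.8), PromptTier::Examples);
    assert_eq!(tier_for(8.0), PromptTier::Examples);
    assert_eq!(tier_for(100.0), PromptTier::FullBriefing);
}
```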
Sph4re: Next-Generation Physics
Three sovereign CTRNN networks — continuous-time recurrent neural networks with Hebbian learning — replace anonymous particle swarms.
Slow time constant. High damping. Monitors threat. Enforces safe bounds. Holds veto authority through consensus.
Medium dynamics. Maintains balance. Highest stability contribution. Boldest explorer. Grounded center.
Fast time constant. Low damping. Drives tempo and action. Burns hot. Fades quickly.
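The three minds above share one underlying update rule. A standard CTRNN Euler step looks roughly like this, where the weights, time constants, and inputs are illustrative rather than AELIX's actual values:

```rust
// One Euler step of a continuous-time recurrent neural network (CTRNN):
//   tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j) + I_i
fn sigma(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

fn ctrnn_step(y: &mut [f64], w: &[Vec<f64>], tau: &[f64], input: &[f64], dt: f64) {
    let n = y.len();
    let mut dy = vec![0.0; n];
    for i in 0..n {
        let net: f64 = (0..n).map(|j| w[i][j] * sigma(y[j])).sum();
        // A slow time constant damps the neuron; a fast one lets it swing.
        dy[i] = (-y[i] + net + input[i]) / tau[i];
    }
    for i in 0..n {
        y[i] += dy[i] * dt;
    }
}

fn main() {
    // With zero weights and zero input, the state decays toward rest.
    let mut y = vec![1.0];
    ctrnn_step(&mut y, &[vec![0.0]], &[1.0], &[0.0], 0.1);
    assert!((y[0] - 0.9).abs() < 1e-12);
}
```

The slow/fast contrast in the text maps directly onto `tau`: a large time constant makes the guardian mind sluggish and stable, a small one makes the driver mind burn hot and fade quickly.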
These minds run continuously — not just during interaction. Between sessions, the organism experiences time. When the user returns, it responds from a living state, not a cold start.
Ripple Consensus aggregates the three minds: who is dominant, how much they agree, how fast conditions shift. This drives all downstream identity behavior.
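One plausible read-out of such a consensus is sketched below; the name comes from the text, but this particular dominance/agreement formula is an assumption.

```rust
// Consensus over three mind activations: who dominates, how much they agree.
fn consensus(minds: &[f64; 3]) -> (usize, f64) {
    // Dominant mind: the index with the highest activation.
    let dominant = (0..3)
        .max_by(|&a, &b| minds[a].partial_cmp(&minds[b]).unwrap())
        .unwrap();
    // Agreement: 1 minus the spread between strongest and weakest.
    let max = minds.iter().cloned().fold(f64::MIN, f64::max);
    let min = minds.iter().cloned().fold(f64::MAX, f64::min);
    (dominant, 1.0 - (max - min))
}

fn main() {
    let (who, agreement) = consensus(&[0.2, 0.7, 0.3]);
    assert_eq!(who, 1);
    assert!((agreement - 0.5).abs() < 1e-12);
}
```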
Hebbian Learning means the organism adapts over hours and days. After 16+ hours of soak testing, the three minds developed distinct behavioral signatures — differentiation score 0.51, confirmed through cryptographic evidence packets (BLAKE3 hashed, 708 artifacts per boot).
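A minimal Hebbian rule with decay, for orientation; the text confirms Hebbian learning but not this exact update, and the constants here are invented.

```rust
// Correlated activity strengthens a weight; decay keeps it bounded.
fn hebbian(w: f64, pre: f64, post: f64, eta: f64, decay: f64) -> f64 {
    w + eta * pre * post - decay * w
}

fn main() {
    // Co-active neurons: the weight grows.
    let grown = hebbian(0.0, 1.0, 1.0, 0.1, 0.01);
    assert!(grown > 0.0);
    // Silent pre-synaptic neuron: the weight only decays.
    let decayed = hebbian(0.5, 0.0, 1.0, 0.1, 0.01);
    assert!(decayed < 0.5);
}
```

Applied tick after tick over hours, a rule of this shape is what lets the three minds drift into the distinct behavioral signatures the soak tests measured.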
The minds live inside a custom Vulkan-based physics world. Waves propagate. Chemical gradients form. Acoustic fields resonate. Sensor data enters as physical waves. The organism does not process data. It lives in it.
What Makes This Different
Most AI identity solutions work at the prompt level: instructions that hope the model complies. AELIX works at the physics level: deterministic constraints that execute regardless of what the model wants to do. The identity holds under:
- Repeated identity probing
- Rename and persona override
- Adversarial prompt injection
- Model replacement
- Extended conversation drift
- Emotional manipulation
The engine is the organism. The machine remembers.
Specifications
| Specification | Value |
|---|---|
| Core Engine | Rust, deterministic, ~186 source files |
| Physics | 3 particle swarms (v1) / 3 CTRNNs (Sph4re) |
| Subsystems | 16, executed in fixed order every tick |
| Identity Axioms | 11, sealed, cryptographically verified |
| Safety Gates | 10, sequential, pre-LLM |
| Emotional Axes | 7, bounded, damped, invariant-enforced |
| Drift Range | 0.05 - 0.25 (companion mode) |
| Ethos Cap | 0.80 maximum drift coefficient |
| Voice Modes | 4 built-in, extensible via config |
| LLM Support | Any Ollama model, cloud API (planned) |
| Evidence | BLAKE3 hashed, 708 artifacts per boot |
| Soak Testing | 16+ hours, Hebbian learning confirmed |
| World Surfaces | 78 shared-memory physics channels |
| Runtime | Local CPU, no GPU required, 15W idle |

