Foundation Layer

Primitives & Trust Modules

The licensable building blocks behind Sparse Supernova. These foundations underpin sparse apps and solutions — from monitoring and predictive systems to trust-heavy deployments — with Novas available as an optional governed runtime layer above them.

Six core primitives handle representation, selection, comparison, anomaly detection, scaling discipline, and transport. Alongside them sit trust, identity, and vault modules for controlled and auditable deployment.

Core Architecture

Why We Built a New Foundation Layer

Most AI today is built on dense vectors and heavy movement of data. That makes systems expensive to run continuously, difficult to govern tightly, and wasteful in energy terms. Sparse Supernova took a different route: keep the meaning, lose the waste, and make the foundation licensable.

Dense systems can work in a few large data centres, but they are the wrong starting point for world models that must run continuously, everywhere, and for years.

These primitives are not just internal techniques. They are the foundation layer from which sparse applications and trust-aware systems are built.

✕ Dense AI

Most values are active. Everything is moved and processed on every step.

★ Sparse Supernova

Most values are inactive — by design. Only the bright points matter.

The energy cost in computing mainly comes from two things: moving data (reading and writing values in memory) and doing maths (multiply-add operations). In Sparse Supernova, the system ignores the zeros and computes only on the small set of active non-zeros, cutting both memory traffic and arithmetic operations.
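To illustrate that cost model, a minimal sketch (not the Sparse Supernova implementation; dimensions and values are invented) comparing the multiply-add count of a dense dot product with one that skips the zeros:

```python
def dense_dot(a, b):
    """Touch every position, even the zeros: one multiply-add per entry."""
    total, ops = 0.0, 0
    for x, y in zip(a, b):
        total += x * y
        ops += 1
    return total, ops

def sparse_dot(a, b):
    """a, b are {index: value} maps; compute only where both are non-zero."""
    total, ops = 0.0, 0
    for i, x in a.items():
        if i in b:
            total += x * b[i]
            ops += 1
    return total, ops

DIM = 10_000
sparse_a = {3: 1.0, 17: 2.0, 512: 0.5}
sparse_b = {17: 4.0, 512: 2.0, 9_000: 1.0}

# materialise the dense equivalents for comparison
dense_a = [0.0] * DIM
dense_b = [0.0] * DIM
for i, v in sparse_a.items():
    dense_a[i] = v
for i, v in sparse_b.items():
    dense_b[i] = v

d_total, d_ops = dense_dot(dense_a, dense_b)
s_total, s_ops = sparse_dot(sparse_a, sparse_b)
assert d_total == s_total                 # identical result
assert (d_ops, s_ops) == (10_000, 2)      # 10,000 multiply-adds vs 2
```

The answer is the same either way; only the amount of data touched and arithmetic performed changes.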

Reference

Six Core Primitives at a Glance

Together, these six primitives form the technical core of the Sparse Supernova foundation layer. They are typically combined with trust, identity, and vault modules to create sparse apps and commercial solutions.

| # | Primitive | What It Does | Why It Matters | Energy Win |
|---|-----------|--------------|----------------|------------|
| 1 | Sparse Signatures | Represent the world | State without dense embeddings | Store / move less |
| 2 | Elevation-Shaped Hashing | Decide what lights up | Selective sparsity with control | Keep density low |
| 3 | Sparse Distances & Anomalies | Measure change / novelty | Always-on monitoring | Compute on non-zeros |
| 4 | Conformal Sparse Detection (USAD) | Decide "is this unusual?" | Lightweight safety layer | Always-on, low cost |
| 5 | Universal Saturation Law (USL) | Know when to stop | Trust vs cost sizing | Avoid over-build |
| 6 | Smart-Atom Router | Move sparse safely | Distributed world models | Fewer bytes + governed |
Architecture

How the Foundation Layer Becomes Products

Sparse Supernova is built in layers. The primitives and trust modules form the licensable foundation layer. Sparse apps and vertical solutions are built from those components. Where orchestration, memory, routing, governance, workflows, or receipts are needed, Novas can sit above them as an optional governed runtime layer.

↳ Short version

Sparse apps are built from the primitives and trust modules. Novas can sit above them as an optional governed runtime layer.

Primitive 1 Sparse Fingerprints

Sparse Signatures

A sparse signature is a compact "fingerprint" of a situation. Instead of filling a vector with thousands of non-zero numbers, we store only a small set of important features in a very large space. Most entries are exactly zero.

Think of it like a night sky: mostly dark, with a few bright points that matter.

Human View

A compact fingerprint that represents a situation using only a small set of important features in a very large space. Text, images, audio, and numerical streams can all be encoded into a common sparse space.

Architect View

  • High-dimensional space (large number of possible positions)
  • Only a tiny fraction of indices are non-zero; the rest are hard zeros
  • Each active index has an elevation encoding presence and importance
  • Text, images, audio, and numerical streams encoded into a common sparse space using deterministic rules
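As a toy illustration of the representation (the dimension, indices, and elevations below are invented, not taken from the actual system), a sparse signature can be held as a small index-to-elevation map inside a very large space:

```python
import sys

DIM = 1_000_000   # a very large space of possible positions (assumed size)

# A sparse signature: only the bright points, stored as {index: elevation}.
signature = {412_337: 0.91, 88_002: 0.55, 730_114: 0.33}

# The dense equivalent materialises every zero.
dense_equivalent = [0.0] * DIM
for idx, elevation in signature.items():
    dense_equivalent[idx] = elevation

active_fraction = len(signature) / DIM   # 3e-06: almost everything is "off"

# Store / move less: the sparse form is a tiny fraction of the dense one.
assert sys.getsizeof(signature) < sys.getsizeof(dense_equivalent) / 1_000
```

The night-sky analogy is literal here: the dict holds the bright points, and the zeros are never stored at all.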

↳ Why It Matters in the Foundation Layer

Sparse applications and world models carry state continuously. Dense embeddings make that state expensive to store, expensive to move, and expensive to compare. Sparse signatures make long-running state realistic: most of the world stays "off" until it matters.

Primitive 2 Deterministic Selection

Elevation-Shaped Hashing

If sparse signatures are a map, elevation-shaped hashing is how we decide which points appear — and how strong they are. We don't place features randomly; we shape importance so the system stays sparse while still capturing what matters.

Human View

Elevation is strength/importance: term strength in text, edge intensity in images, event strength in signals. This primitive shapes which features "light up" and how brightly, keeping sparsity high without losing structure.

Architect View

  • Deterministic hashing from features → positions in high-dimensional space
  • Elevation shaping from robust statistics and/or learned weights
  • Top-k selection that keeps only the strongest signals
  • Collision handling that favours more informative features
  • Strict density constraints so we never accidentally "turn everything on"
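A minimal sketch of the selection loop described above, assuming a generic hash (BLAKE2b), an invented density budget `TOP_K`, and a strongest-wins collision rule; the real shaping statistics and learned weights are not shown:

```python
import hashlib

DIM = 2**20   # high-dimensional target space (assumed size)
TOP_K = 4     # strict density budget: never "turn everything on"

def position(feature):
    """Deterministic: the same feature always lands on the same index."""
    digest = hashlib.blake2b(feature.encode(), digest_size=8).hexdigest()
    return int(digest, 16) % DIM

def shape(features):
    """features: {name: elevation}. Hash, resolve collisions, keep top-k."""
    sig = {}
    for name, elevation in features.items():
        idx = position(name)
        # collision handling favours the more informative (stronger) feature
        if elevation > sig.get(idx, 0.0):
            sig[idx] = elevation
    # top-k selection keeps only the strongest signals
    strongest = sorted(sig.items(), key=lambda kv: kv[1], reverse=True)[:TOP_K]
    return dict(strongest)

sig = shape({"pump_noise": 0.8, "temp_rise": 0.6, "hum": 0.2,
             "drift": 0.5, "flicker": 0.1})
assert len(sig) <= TOP_K                                          # density holds
assert shape({"pump_noise": 0.8}) == shape({"pump_noise": 0.8})   # deterministic
```

The top-k cap is what enforces the "never accidentally turn everything on" constraint: no matter how many features arrive, only the strongest few light up.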

↳ Why It Matters in the Foundation Layer

Sparse systems fail in two ways: they either waste energy by activating too much, or they lose meaning by throwing away the wrong signals. Elevation-shaped hashing is the control that keeps sparsity high without losing structure.

Primitive 3 Sparse Comparisons

Sparse Distances & Anomalies

To understand change, you need to compare "now" to "before". Our sparse distance and anomaly measures do that using only the few active positions — not the whole vector.

Human View

A set of comparison tools that measure how different "now" is from "before" — using only the small number of active positions, not every single value. This enables continuous monitoring without overwhelming compute costs.

Architect View

  • Sparse cosine similarity / sparse distances over signatures
  • k-nearest-neighbour scoring in sparse space
  • Multi-timescale novelty traces (short / medium / long windows)
  • Computed on sparse structures, touching far fewer values per operation
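A minimal sketch of one such comparison, sparse cosine similarity computed only over active positions; the signatures below are illustrative, and this is not the actual library API:

```python
import math

def sparse_cosine(a, b):
    """Cosine similarity over {index: elevation} maps; zeros are never touched."""
    # iterate the smaller signature and probe the larger one
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    dot = sum(v * large[i] for i, v in small.items() if i in large)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

before = {3: 1.0, 17: 2.0, 512: 0.5}    # "before" signature
now = {17: 2.0, 512: 0.5, 9_000: 1.0}   # "now": one feature gone, one new

novelty = 1.0 - sparse_cosine(before, now)   # higher = more change
assert 0.0 < novelty < 1.0
```

The work done scales with the handful of active positions, not the full dimension, which is what makes running this comparison continuously affordable.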

↳ Why It Matters in the Foundation Layer

A sparse monitoring system is only useful if it can monitor change continuously. Dense comparisons are too expensive to run all the time; sparse distances keep monitoring always-on without dragging energy costs up.

Primitive 4 Statistical Guardrail

Conformal Sparse Detection (USAD)

This primitive answers: "Is this unusual enough that I should care?" — and it does it with a clear statistical control layer rather than a vague score.

Human View

A lightweight guardrail that can run everywhere the world model runs. It provides a clear yes/no answer with statistical rigour — not a vague anomaly score — so the system knows when something genuinely unusual has occurred.

Architect View

  • Universal anomaly detector built on sparse signatures
  • Robust statistics → k-NN nonconformity → conformal quantiles for thresholds
  • Controls false alarms when calibration assumptions hold
  • Small, dependency-light module that runs in browsers, edge devices, and servers
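The conformal step can be sketched as follows. The calibration scores and the 10% alpha are invented, and plain numbers stand in for the real USAD nonconformity score (k-NN in sparse space):

```python
import math

def conformal_threshold(calibration_scores, alpha):
    """(1 - alpha) conformal quantile of the calibration scores."""
    n = len(calibration_scores)
    # conformal rank: ceil((n + 1) * (1 - alpha)), clipped to the sample size
    rank = min(math.ceil((n + 1) * (1.0 - alpha)), n)
    return sorted(calibration_scores)[rank - 1]

def is_unusual(score, threshold):
    """A clear yes/no decision, not a vague anomaly score."""
    return score > threshold

# invented calibration scores standing in for k-NN nonconformity values
calib = [0.10, 0.20, 0.15, 0.30, 0.25, 0.22, 0.18, 0.27, 0.12, 0.20]
thr = conformal_threshold(calib, alpha=0.1)   # ~10% false-alarm budget

assert not is_unusual(0.20, thr)   # a typical score raises no alarm
assert is_unusual(0.90, thr)       # far outside calibration: alarm
```

When the calibration assumptions hold, the quantile construction is what bounds the false-alarm rate near alpha, turning "is this unusual?" into a controlled decision.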

↳ Why It Matters in the Foundation Layer

If a sparse application runs everywhere, it needs a guardrail that can run everywhere too. USAD is designed to be always-on without turning safety into a second heavy system.

Primitive 5 Scaling Law

The Universal Saturation Law (USL)

USL is our law of scaling and agreement. It describes how agreement improves as you add effective dimension (or independent checks), and it gives a practical outcome: when to stop scaling so you don't waste energy.

Human View

A design tool that tells you: beyond a certain point, extra scale buys very little and costs a lot. Size for the trust you need, then stop. This is how we avoid the "scale until budgets run out" trap.

Architect View

  • Agreement rises with effective dimension (signature dimension, ensemble width, validator count, replica count, etc.)
  • A drift/ambiguity parameter captures how hard the environment is
  • Provides a stopping rule: beyond a certain point, extra scale buys little and costs a lot
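To show the shape of such a stopping rule, a sketch using an assumed exponential saturation curve; the actual USL functional form is not given here, so this stands in for it, and all constants are invented:

```python
import math

def agreement(effective_dim, drift):
    """Assumed saturating curve: agreement rises with effective dimension,
    flattens past a knee, and harder environments (higher drift) sit lower."""
    return (1.0 - math.exp(-effective_dim / 64.0)) / (1.0 + drift)

def size_for_trust(target, drift, max_dim=4096):
    """Stopping rule: the smallest effective dimension meeting the target."""
    for d in range(1, max_dim + 1):
        if agreement(d, drift) >= target:
            return d
    return max_dim   # target unreachable at this drift: cap, don't over-build

d = size_for_trust(target=0.90, drift=0.05)
# past the knee, doubling the dimension buys almost nothing
assert agreement(2 * d, 0.05) - agreement(d, 0.05) < 0.06
```

The design point survives any particular curve: size for the trust you need, find the knee, and stop there rather than scaling until budgets run out.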

↳ Why It Matters in the Foundation Layer

Most systems scale until budgets run out. Sparse systems need a more responsible approach: size for the trust you need, then stop. USL is a design tool for doing that deliberately.

Primitive 6 Transport Primitive

Smart-Atom Router

Sparse isn't only about how we represent the world. It's also about how we move information through a system. In real deployments, energy is burned in data movement: storage, bandwidth, and network hops. The Smart-Atom Router is our transport layer for sparse payloads — move only what matters, not everything.

Human View

If you represent the world sparsely but move it densely, you lose the benefit. The Smart-Atom Router keeps sparsity end-to-end: representation, comparison, monitoring, and transport — so the energy savings carry through the entire system.

Architect View

  • Real-time routing for sparse "atom" packets (including streaming modes)
  • Cryptographic integrity and provenance support
  • Receipt logging so flows are traceable and auditable
  • Built to treat bandwidth and energy as first-class constraints
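A toy sketch of the transport idea, assuming an invented JSON wire format, SHA-256 for integrity, and an in-memory receipt list; none of these are the real Smart-Atom protocol:

```python
import hashlib
import json

RECEIPTS = []   # receipt logging: flows stay traceable and auditable

def pack_atom(signature, source):
    """Serialize only the non-zeros and stamp a provenance digest."""
    body = json.dumps({"src": source, "active": sorted(signature.items())})
    digest = hashlib.sha256(body.encode()).hexdigest()
    RECEIPTS.append({"src": source, "digest": digest, "bytes": len(body)})
    return json.dumps({"body": body, "digest": digest}).encode()

def unpack_atom(packet):
    """Verify integrity before accepting the payload."""
    wrapper = json.loads(packet)
    if hashlib.sha256(wrapper["body"].encode()).hexdigest() != wrapper["digest"]:
        raise ValueError("integrity check failed")
    return {int(i): v for i, v in json.loads(wrapper["body"])["active"]}

sig = {17: 2.0, 512: 0.5}                 # move only what matters
packet = pack_atom(sig, source="sensor-7")
assert unpack_atom(packet) == sig         # round-trip with integrity intact
```

Because only active positions are serialized, the bytes on the wire scale with sparsity, and the digest plus receipt give every hop something to verify and audit.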

↳ Why It Matters in the Foundation Layer

Sparse systems are often distributed, and dense transport would quietly undo the savings made everywhere else. The Smart-Atom Router closes that gap, keeping payloads sparse from representation and comparison through monitoring and transport.

Trust Layer

Trust, Identity & Vault Modules

Alongside the six core primitives sit trust, identity, and vault modules that support controlled deployment, provenance, integrity, encryption, and custody. These modules are part of the licensable foundation layer and are used where sparse systems need security, auditability, or commercial-grade control.

Not every deployment needs every trust module. They are combined as required by the application, customer environment, and governance needs.

The foundation layer is designed to be licensable, composable, and commercially deployable.

Say less. Do more. Prove it.