Hyperpretext

Text layout through topology — extending @chenglou/pretext to WebGPU, WASM, and arbitrary manifolds.

The Problem

Text on screens is flat. Canvas2D gives you layout but not speed. WebGL gives you speed but not layout. WebGPU gives you both, but nobody has connected pretext's clean layout engine to a GPU pipeline that can render through non-Euclidean surfaces.

HYPERPRETEXT bridges this gap: take a text layout computed by pretext, rasterize glyphs into an SDF atlas, instance them on the GPU, and embed the result into arbitrary topological surfaces. Ribbons. Möbius strips. 3-manifolds. Text that flows through geometry, not on top of it.

The Architecture

Five transformation layers decouple logical text flow from spatial instantiation. Each layer is independently useful; composed, they produce text rendering that no existing browser pipeline can match.

Pretext (Layout)

Cheng Lou's pretext library computes text layout: line breaking, glyph positioning, kerning, bidirectional flow. Pure logic, no rendering. The output is a positioned glyph stream with sub-pixel metrics.

input: unicode text + font metrics
output: positioned glyph stream
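Treated as a pure function, the layout contract looks like the sketch below. The types, the metrics callback, and the naive greedy line break are illustrative assumptions, not pretext's actual API:

```typescript
// Sketch of the layout contract: a pure function from text + metrics
// to a positioned glyph stream. All names here are hypothetical.
interface GlyphMetrics { advance: number; }               // per-glyph advance, px
interface PositionedGlyph { char: string; x: number; y: number; }

function layout(
  text: string,
  metrics: (ch: string) => GlyphMetrics,  // hypothetical metrics lookup
  maxWidth: number,
  lineHeight: number,
): PositionedGlyph[] {
  const out: PositionedGlyph[] = [];
  let x = 0, y = 0;
  for (const ch of text) {
    const { advance } = metrics(ch);
    if (x + advance > maxWidth) { x = 0; y += lineHeight; } // naive break
    out.push({ char: ch, x, y });
    x += advance;
  }
  return out;
}
```

Real line breaking (UAX #14, BiDi, kerning) is far richer; the point is only the shape of the contract: no rendering state, just data in, positioned glyphs out.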

SDF Atlas (Rasterization)

Each unique glyph is rasterized once into a signed distance field. The atlas packs all glyphs into a single GPU texture. SDF rendering gives resolution-independent edges at any zoom level, with crisp anti-aliasing for free.

input: glyph outlines (TTF/OTF)
output: GPU texture atlas
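A brute-force bake of such a distance field, assuming a binary coverage bitmap as input; the `bakeSDF` name, the normalization to [0, 1], and the O(n²) nearest-edge search are all illustrative (production bakers use fast distance transforms or multi-channel SDFs):

```typescript
// Why SDF: each texel stores signed distance to the glyph edge, so the
// shader can reconstruct a crisp edge at any zoom with a smoothstep.
// Brute-force bake — acceptable for a one-time atlas build, not runtime.
function bakeSDF(bitmap: boolean[][], spread: number): number[][] {
  const h = bitmap.length, w = bitmap[0].length;
  const sdf = bitmap.map(row => row.map(() => 0));
  for (let y = 0; y < h; y++) for (let x = 0; x < w; x++) {
    let best = spread; // clamp distance to the spread radius
    for (let v = 0; v < h; v++) for (let u = 0; u < w; u++) {
      if (bitmap[v][u] !== bitmap[y][x]) {
        best = Math.min(best, Math.hypot(u - x, v - y));
      }
    }
    // inside → positive, outside → negative, remapped to [0, 1]
    const signed = bitmap[y][x] ? best : -best;
    sdf[y][x] = 0.5 + 0.5 * signed / spread;
  }
  return sdf;
}
```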

WebGPU Instanced Rendering

Each glyph becomes a GPU instance: a textured quad sampling from the SDF atlas. One draw call renders the entire text block. Instance buffers carry position, UV coordinates, and per-glyph color. Frame budget: sub-millisecond for 100K+ glyphs.

input: glyph stream + SDF atlas
output: rendered text (flat plane)
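One way to pack those per-instance attributes, assuming a 10-float layout (position, atlas UV rect, RGBA); the actual buffer format in hyperpretext may differ:

```typescript
// Pack per-glyph instance data into one flat Float32Array, ready for
// device.createBuffer / queue.writeBuffer. The layout is an assumption.
interface GlyphInstance {
  x: number; y: number;                           // layout position (px)
  u0: number; v0: number; u1: number; v1: number; // SDF atlas UV rect
  color: [number, number, number, number];        // per-glyph RGBA
}

const FLOATS_PER_INSTANCE = 10;

function packInstances(glyphs: GlyphInstance[]): Float32Array {
  const buf = new Float32Array(glyphs.length * FLOATS_PER_INSTANCE);
  glyphs.forEach((g, i) => {
    buf.set([g.x, g.y, g.u0, g.v0, g.u1, g.v1, ...g.color],
            i * FLOATS_PER_INSTANCE);
  });
  return buf;
}

// Upload once, then a single draw call renders every glyph:
//   pass.setVertexBuffer(1, instanceBuffer);
//   pass.draw(6, glyphs.length); // 6 verts per quad, N instances
```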

Topological Embedding

The flat glyph plane is embedded into an arbitrary parametric surface. A vertex shader maps (u, v) layout coordinates through a surface function f: R^2 -> R^3. Ribbons, helicoids, Klein bottles, arbitrary 2-manifolds embedded in 3-space. Normal vectors recomputed for correct lighting.

input: flat text + surface parametrization
output: text on manifold
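As one concrete example of such a surface function, here is a standard helicoid with a finite-difference normal, mirroring what the vertex shader would compute per vertex; the parametrization and epsilon are illustrative choices:

```typescript
// A surface function f: R^2 -> R^3 (helicoid) plus its unit normal,
// computed as the normalized cross product of the partial derivatives.
type Vec3 = [number, number, number];

function helicoid(u: number, v: number, pitch = 1): Vec3 {
  return [v * Math.cos(u), v * Math.sin(u), pitch * u];
}

function normal(f: (u: number, v: number) => Vec3, u: number, v: number): Vec3 {
  const e = 1e-4; // finite-difference step
  const p = f(u, v), pu = f(u + e, v), pv = f(u, v + e);
  const du = pu.map((c, i) => (c - p[i]) / e) as Vec3; // ∂f/∂u
  const dv = pv.map((c, i) => (c - p[i]) / e) as Vec3; // ∂f/∂v
  const n: Vec3 = [                                    // du × dv
    du[1] * dv[2] - du[2] * dv[1],
    du[2] * dv[0] - du[0] * dv[2],
    du[0] * dv[1] - du[1] * dv[0],
  ];
  const len = Math.hypot(...n);
  return n.map(c => c / len) as Vec3;
}
```

Swapping `helicoid` for a ribbon or Klein-bottle immersion changes only `f`; the normal computation and the glyph (u, v) coordinates stay the same.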

Agent Context Server

The rendering pipeline exposes a context API: agents can query which text is visible at any camera position, what the current topology is, and request layout reflows in real time. The server bridges the GPU renderer to AI agent context windows, enabling spatial text navigation as an agent capability.

input: camera state + agent queries
output: visible text context + layout control
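A minimal sketch of the visibility query described above, with camera state reduced to a 2D viewport over layout space; the `Viewport` shape and the function name are assumptions, since the source only specifies the contract:

```typescript
// Given camera state (here: a rectangle over layout coordinates),
// return the text an agent can currently "see". Hypothetical API shape.
interface Viewport { x: number; y: number; w: number; h: number; }
interface PlacedGlyph { char: string; x: number; y: number; }

function visibleText(glyphs: PlacedGlyph[], view: Viewport): string {
  return glyphs
    .filter(g => g.x >= view.x && g.x < view.x + view.w &&
                 g.y >= view.y && g.y < view.y + view.h)
    .map(g => g.char)
    .join("");
}
```

A real server would also project through the surface embedding and account for occlusion; this shows only the flat-space core of the query.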

Live Demo

hyperclaude.cc — an atlas of Claude Code's 521K lines of source rendered as topographic cartography. The first application of this pipeline: SDF text on a heightmap surface with semantic zoom.

Why Pretext

Most GPU text renderers skip layout entirely (hardcoded positions) or re-implement it poorly (no BiDi, bad line breaking). Pretext already solves layout correctly. By treating pretext as a pure function from text to positioned glyphs, HYPERPRETEXT inherits its correctness and extends its reach from Canvas2D to any rendering target.

The key insight: layout is topology-independent. The same glyph positions work whether the target surface is a flat rectangle, a cylinder, or a Möbius strip. Only the final embedding step changes.
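That insight fits in a few lines: the same (u, v) glyph coordinate passes through different embeddings unchanged, and only the surface function varies. The cylinder radius below is an illustrative choice:

```typescript
// Two embeddings of the same layout coordinate. Layout never changes;
// only the final map from (u, v) into 3-space does.
type V3 = [number, number, number];

const plane = (u: number, v: number): V3 => [u, v, 0];

const cylinder = (u: number, v: number, r = 1): V3 =>
  [r * Math.cos(u / r), r * Math.sin(u / r), v]; // u wraps around, v runs along the axis
```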

Status: Active development. The SDF atlas and WebGPU instanced renderer are working. Topological embedding is at the prototype stage. The agent context server is specified but not yet implemented.

Built by @DanielleFong. Part of the HYPERCLAUDE project family.