What Happened While I Was Quiet
A constraint system, a break, and a stretch of rabbit holes that turned into a corpus.
The last time I posted here was the end of January.
What I left behind in that post wasn’t a theory. It was a constraint system.
The nine-regime test suite and the three short infrastructure volumes (LFIS-20, 21, 22) said something specific: here is coherence and order that cannot be denied across the nine regimes. Same exponents across regimes. Completion without remainder. No regime-specific tuning. We didn’t claim to know how it all works. The claim was narrower and harder to argue with: if you want to say something works in this universe, it has to work within this constraint.
And then I let that sit.
A constraint without a derivation is a fence. It marks where you can’t go. It doesn’t tell you how to walk the field inside the fence. The fence was the achievement of January. The interior was still empty.
Then I took a break. Partway through it, I thought: why not give it a shot myself? I know the constraints. I have pieces. I might as well try.
It turned out to be much harder than I thought. Each piece I touched led somewhere I didn’t expect. The deuterium puzzle led to the pre-scaffold below F=0. The pre-scaffold led to lithium-7 sitting outside the closure ladder. Lithium-7 led to cord-rotation outflow geometry. Cord-rotation outflow geometry led to a ceiling-proximity load variable and the Mode Balance Condition surfacing as the framework’s organizing rule across every domain at once. The framework architecture got its own volume because the domain volumes kept needing it. The double-blow-out theorem turned up because the c²↔c³ numerical agreements were too close to be accident and the corpus had to say so explicitly.
Rabbit hole after rabbit hole. Most of them dead ends, some of them not. Pieces came together. Not all the way. But enough that on April 29, 2026, I uploaded twenty records to Zenodo in a single day. Every volume of the formal corpus that LFCT currently has is now archived, citable, and timestamped.
The constraint is still standing. That part of January didn’t change. What’s new is that some of the interior is now sketched in, and the sketch is on the public record.
This post is the catch-up. What’s there, what’s new, and what I’m planning to do here on the blog over the next year.
The Cannonball Run
Twenty Zenodo records is a lot. I assure you I could not have worked that fast without AI doing almost all of the writing. Here is how it sorts out.
The Core trilogy went up as one bundled record together with the Notation Guide. Three volumes — Conceptual Spine, Mathematical Framework, Physical Derivations — plus the symbol registry that locks every reserved term and operator across the corpus. These are the documents written with hindsight; they don’t follow the chronological development of the theory, they distill it.
Then the LFIS series — the Light Frame Infrastructure Series — went up as nine standalone records. Eight of those volumes were already developing through the fall of 2025; two are new since the January post: LFIS-30 on framework architecture (the dual-layer / three-contract structure that organizes how Forcing and Representation operate at the framework level), and LFIS-31 on nuclear binding from cadence closure. The Series Guide went up alongside as a structural map of the 31-volume series — orientation, not derivation, for anyone trying to figure out where to start.
Ten papers went up. Some are tier 1 prediction papers (G — compact-object evaporation, K — two-fabric readout offset for the Hubble and S₈ tensions, X — galaxy uniqueness, T — hybrid damping envelope). Others are tier 2 supporting work (T2, U, V, W, Y, Z — covering envelope-extraction tests, the carrier-native CMB production model with companion data and code, unit grammar, the representability window, void-well temporal shear, and distributed nucleosynthesis). Paper U has two companion uploads: the wave-test log TL.001–027 as a data archive, and v24_production.py as a code archive.
DOIs for everything are in the project’s publish tracker. You can browse the LFCT community on Zenodo and pull whichever volumes are relevant.
What’s Actually New
If you read the January post you already know the framework. So what changed in three months?
Three things, mostly.
Nuclear binding became a domain. The framework wasn’t built to predict the binding energy curve. But the same closure structure that handles cosmology — the F=2 scaffold, the wrapping map, the contract-sphere geometry — turns out to fix the iron-peak ceiling at ε² × m_p × c² ≈ 9.63 MeV per nucleon. The doubly-magic anchors (helium-4, oxygen-16, calcium-40) sit at A_F = 4 × T_{F+1} on the balance surface, ±2.1% empirical, no fitted parameters. The post-Ca-40 climb to nickel — Ti-48, Cr-52, Fe-56, Ni-62 — closes through a reserve-fill law L(A) = (1/8)ε²m_pc²[1−(1−ε²)^{A−40}], sub-percent residuals once Ca-40 is taken as the observed start. This is what LFIS-31 covers, with the formal absorption running through Core PD.
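The two quoted formulas are easy to check numerically. A minimal sketch, with two assumptions flagged: ε² is backed out of the stated 9.63 MeV ceiling (ε² × m_p × c², with m_p c² ≈ 938.272 MeV), and the Ca-40 baseline binding energy (≈ 8.551 MeV per nucleon) is ordinary empirical input, not something derived here:

```python
# Illustrative sketch of the LFIS-31 formulas as quoted in the post.
# Assumed inputs: m_p c^2 from CODATA; eps^2 backed out of the stated
# 9.63 MeV ceiling; the Ca-40 baseline is empirical, not derived.

MP_C2 = 938.272           # proton rest energy, MeV
CEILING = 9.63            # stated iron-peak ceiling, MeV per nucleon
EPS2 = CEILING / MP_C2    # eps^2 implied by ceiling = eps^2 * m_p * c^2

def reserve_fill(A: int) -> float:
    """Reserve-fill law L(A) = (1/8) eps^2 m_p c^2 [1 - (1 - eps^2)^(A-40)]."""
    return 0.125 * EPS2 * MP_C2 * (1.0 - (1.0 - EPS2) ** (A - 40))

B_CA40 = 8.551  # observed Ca-40 binding per nucleon, MeV (empirical input)

for A, name in [(48, "Ti-48"), (52, "Cr-52"), (56, "Fe-56"), (62, "Ni-62")]:
    print(f"{name}: B(A) ~ {B_CA40 + reserve_fill(A):.3f} MeV per nucleon")
```

Run as-is, this reproduces the climb toward the Ni-62 figure of roughly 8.79 MeV per nucleon; whether the residuals hold at the stated sub-percent level across the whole sequence is a question for LFIS-31, not for this sketch.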
A cosmology framework that also gets the binding-energy curve at sub-percent without tuning is a different kind of object than one that doesn’t. I plan to write about this one in detail later in the year.
The Mode Balance Condition surfaced as the framework’s organizing rule. It’s a single inequality — κ_TD F_TD(x) ⋚ κ_TS F_TS(x) — that discriminates every regime in the framework. The galaxy crossover at r_*. The CMB damping-tail floor. The electroweak crossover on the c-ladder. The Hubble-tension routing mismatch. The void/filament luminosity offset. The nuclear-binding extension/balance/closure sequence. Same rule, every domain. Six LFIS volumes carry domain instances; LFIS-30 declares it as the governing principle. I’ll come back to this on the blog — it’s the kind of unification claim that earns a post on its own.
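As stated, the condition is a three-way comparison, and that structure can be sketched in a few lines. Purely schematic: the κ weights and the load functions F_TD(x), F_TS(x) are domain-specific and entirely unspecified here; this only shows the trichotomy the ⋚ encodes:

```python
# Schematic only: kappa_td, kappa_ts and the load values f_td = F_TD(x),
# f_ts = F_TS(x) are placeholders for whatever each LFIS domain instance
# actually defines. Nothing physical is computed here.

def mode_balance(kappa_td: float, f_td: float,
                 kappa_ts: float, f_ts: float,
                 tol: float = 1e-12) -> str:
    """Evaluate kappa_TD * F_TD(x) vs kappa_TS * F_TS(x) as a trichotomy."""
    diff = kappa_td * f_td - kappa_ts * f_ts
    if abs(diff) <= tol:
        return "balance"   # the crossover surface (e.g. r_* in a galaxy)
    return "TD side" if diff > 0 else "TS side"
```

A regime crossover like the galaxy r_* would then be the locus where the return value flips through “balance” as x varies.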
Several earlier results got formal homes. The Three-Mode Redistribution Law (in Core MF) is now the master theorem behind the dark-energy 5/7 split, the Hubble-tension β redistribution fraction, and the gravitational-slip α(x) response — all instantiations of a single conservation rule. The Double Blow-Out theorem locks E = c³ as a structural identity (compound cost ε · C₀ = 1/c³ = ε^(3/2)), which also gives the gravitational-slip sign η − 1 = −1/c³ and the time-dilation identity ρ_F3(x) · c_local(x) = const as direct corollaries. The c²↔c³ resolution corollary explains why so many numerical agreements between c²-family and c³-family expressions fall within fractions of a percent: every ε-power carries a fabric-depth address, and the agreements are reading-equivalence consequences rather than coincidence.
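The compound-cost identity reduces to a two-line algebraic check once the equalities above are taken at face value. Note the one inference this makes: the stated chain 1/c³ = ε^(3/2) forces ε = 1/c², which I am reading off this post’s statement rather than quoting from the corpus.

```latex
% Assuming \varepsilon = 1/c^{2} (implied by 1/c^{3} = \varepsilon^{3/2})
% and the cadence constant C_{0} = 1/c:
\varepsilon \cdot C_{0}
  = \frac{1}{c^{2}} \cdot \frac{1}{c}
  = \frac{1}{c^{3}},
\qquad
\varepsilon^{3/2}
  = \left(\frac{1}{c^{2}}\right)^{3/2}
  = \frac{1}{c^{3}},
\quad\Rightarrow\quad
\varepsilon \cdot C_{0} = \varepsilon^{3/2} = \frac{1}{c^{3}}.
```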
A1 — the foundational axiom — was also refined. It now reads as a structural primitive: light is the balance of energy that defines coherence through time and space; each frame is the measure and universal definition of relations to all other frames. The earlier numerical-invariance form (C₀ = 1/c) is downstream now, as the cadence-constant definition.
What’s Coming on the Blog — and Why I’m Writing It
I should say something honest about why I write this blog at all.
I’m not a physicist by training. The way LFCT actually gets built is that I ramble what’s in my head to an AI, and I assure you that at first it sounds like nonsense. The AI comes back with a challenge, I say more nonsense, and somehow the question seems to evaporate. So many times I’ve had to ask, “What is the question?” Then the AI translates the ramble into scientific language and formulas, and I have to translate that back into my head — to make sure the AI got it right, that I got it right, that nothing was lost or smuggled in along the way. I don’t understand a lot of what the AI says to me, and it often forgets what was already agreed or makes errors. But without the rambling-out-loud I’d never have gotten this far either — the AI is a much faster sounding board than my own brain alone. Honestly, most of it is just saying the same thing over and over: 1, 2, 3, and occasionally 4 times.
Most of the harder problems got solved by reframing the question rather than answering the original one. The first reframe was the framing of LFCT itself: what if “missing mass” is missing time? That single reframe is the seed of everything that followed. It keeps happening at smaller scales, too. The lithium-7 nucleosynthesis puzzle wasn’t an energy problem; it was a timing problem about cord-rotation outflow. The “many carriers” objection to a candidate operator (gravity, radiation, heat, magnetism — which one does it pick?) wasn’t a strike against the operator; it was the operator’s missing third component. The pattern is consistent: the question often needs to be replaced before any answer fits.
The blog is part of how I do that. When I write something here — slowly, in plain language, for actual readers — I have to reckon with what I actually understand. Not what the corpus says I understand. What I can defend in front of someone reading on a Sunday morning with their second coffee. That slow translation catches things the formal corpus and the AI both miss, because both can run on internal momentum without me catching up. Substack is the catching-up.
It is also how I learn physics as best I can. The corpus moves faster than I do and I don’t remember many things well. I have to make calls about what to dive deep on and what to leave for later — understand everything I just wrote, or push toward the next problem. Some weeks I’ll ramble about a corner I’m still working out for myself. Some weeks I’ll return to something I thought the AI had gotten months ago. The blog is partly for me to put it into language that I understand, which is different from the non-language stuff my brain understands. The stuff that works with the thing that goes around the other thing making the new stuff, for instance.
The 52-week plan I’ve drafted: Phase 1 (the next four weeks) is catch-up — a standalone intro for new readers, a personal piece on what changed between the original intuition and the formal framework, then a scorecard post walking through the public test suite and the new ten-correction CMB production model. Phase 2 (Weeks 5–16) is the big results — galaxy dynamics, dark energy, the Hubble tension, the cadence-star limaçon, lensing without halos, gravitational slip — one major piece per week. Phases 3–6 go deeper into architecture, connections, forward look, and year-end consolidation. Standalone posts on the new material — nuclear binding, the Mode Balance Condition, the framework-architecture volume — slot in where they best fit.
Each physics post links to the relevant Zenodo volume(s). If you want the formal version, you can pull it. If you want the slow walk, that’s what’s here.
Closing Note
What’s worth saying before signing off: none of this means LFCT is finished. The constraint is well-fenced. The interior is partly sketched, not fully filled. There’s a candidate three-part operator — Representation, Forcing, Carrier-Selection — that captures what the light-frame computes when it encounters mass, but only at principle level. Static gravity from TD↔TS matching closes leading order; higher-order corrections are open. The carrier-mode → physical-carrier map (why TD overflow becomes gravity, TR overflow becomes radiation, and so on) is named but not derived.
What the cannonball run does mean is that the framework now has an interior — actual derivations, not only the perimeter constraint — publicly archived, citable, and falsifiable in a way it wasn’t three months ago. Anyone who wants to break it has a fixed target. Every paper includes failure conditions. The scoring is public, the scripts are public, the predictions are on the record with DOIs.
I started this Substack a year and a half ago because I was trying to explain a feeling — that what we call “missing” in the universe might not be mass at all, but time. The feeling turned into a hunch, the hunch into a constraint, the constraint into a derivation, the derivation into a corpus. The corpus is now archived.
The next thing is to make it readable.
That’s what the blog is for. See you next week.
Thanks for reading Heart of Aletheia. If you want the formal record, the LFCT community on Zenodo collects every volume and paper. Each future post will link to the volumes it draws on, so you can read at whatever depth suits you.

