LFIS 20-22: LFCT Infrastructure Update
Nine-Regime Tests and Three Canon Locks
Over the past several months, I’ve been running and refining a nine-regime observational test suite for Light Frame Cadence Theory (LFCT). The suite spans multiple astrophysical environments and observable classes, with the goal of testing not a single effect, but structural consistency across regimes.
That test suite is now stable and publicly archived.
Alongside it, I’ve finalized and published three short infrastructure volumes that lock the framework commitments used throughout the tests. These volumes introduce no new data and no new fitting. Their purpose is to make explicit what had already been relied upon implicitly, so results cannot be reinterpreted after the fact.
The Test Suite
The Nine-Regime Observational Test Suite applies a single representability framework across environments that are normally treated independently. No regime-specific tuning is introduced, and the same structural commitments are used throughout.
The point of the suite is not to maximize fit quality in any one domain, but to test whether a single representational structure can remain coherent across all of them.
Early Observational Signal
Across the nine regimes, the test suite exhibits a consistent structural pattern rather than regime-specific scatter. When environmental routing and representability constraints are respected, residual behavior remains stable rather than diverging with distance or regime.
These results are not presented as definitive proof of the framework, but they are sufficiently coherent to justify formalizing the infrastructure commitments used throughout the analysis. Detailed results, scripts, and replication materials are archived with the dataset.
In parallel with my own runs, portions of the test suite were independently re-executed in a separate analysis environment (Claude), using the same structural assumptions but without shared state. The resulting behavior was consistent at the level relevant for infrastructure commitments, providing encouraging confirmation that the observed patterns are robust rather than implementation-specific.
What was particularly striking was the behavior of the SPARC galaxy sample. When the cadence-balance exponent was allowed to vary, the empirically preferred value was:
Observed slope: Δ_obs = 0.268
LFCT prediction: Δ = 0.25 (exactly 1/4)
The difference is 0.018, about 7% of the predicted value. More importantly, Δ = 0.25 produces the minimum scatter across all tested values, with a residual dispersion of MAD ≈ 0.0405 dex. This value was not selected to improve fit quality; it was fixed a priori by the framework. That it coincides with the scatter minimum in the data is a structural signal, not a tuning result.
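To make the shape of this test concrete, here is a minimal sketch of an exponent sweep of the kind described: scan candidate values of Δ and keep the one that minimizes the median absolute deviation (MAD) of the residuals. The data below are synthetic placeholders (the SPARC sample and the LFCT pipeline are not reproduced here), constructed so that the true exponent is 0.25; everything else about the sketch is an assumption for illustration.

```python
import random
from statistics import median

random.seed(0)

# Synthetic stand-in for the per-galaxy quantities (hypothetical data;
# the real analysis uses the SPARC sample, which is not reproduced here).
n = 500
x = [random.uniform(-1.0, 1.0) for _ in range(n)]            # covariate entering the scaling law
TRUE_DELTA = 0.25                                            # exponent fixed a priori in this toy setup
y = [TRUE_DELTA * xi + random.gauss(0.0, 0.005) for xi in x] # observable with small intrinsic noise

def mad(values):
    """Median absolute deviation about the median."""
    m = median(values)
    return median(abs(v - m) for v in values)

def scatter(delta):
    """Residual dispersion when the cadence-balance exponent is set to delta."""
    return mad([yi - delta * xi for xi, yi in zip(x, y)])

# Sweep candidate exponents 0.10 .. 0.40 and keep the scatter-minimizing one.
grid = [round(0.10 + 0.01 * k, 2) for k in range(31)]
best = min(grid, key=scatter)
print(f"scatter-minimizing exponent: {best}")
```

The point of the sketch is the logic, not the numbers: when the generating exponent really is 0.25, the MAD-versus-Δ curve bottoms out there, which is the kind of coincidence between an a priori value and the empirical scatter minimum that the passage above describes.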
LFIS–20: Exponent Definitions
LFIS–20 fixes the meaning of the two scaling exponents that appear throughout LFCT:
the cadence-balance exponent Δ, and
the induced response exponent δ.
These are not free parameters and are not fit per regime. LFIS–20 records their definitions and relationship explicitly, eliminating ambiguity that can arise when they appear in different observational contexts.
LFIS–21: Cross-Regime Closure
LFIS–21 records the commitment that the same values of Δ, δ, and the universal acceleration scale are applied across all nine regimes without adjustment.
This volume exists to make cross-regime consistency a binding constraint, rather than an informal assumption.
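One way to picture this commitment operationally is as a single frozen constants object handed unchanged to every regime's analysis. The sketch below is illustrative only: the regime labels, the induced-response value, and the acceleration-scale value are placeholders I have invented for the example, not the published ones; only Δ = 0.25 is taken from the framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanonConstants:
    # Δ = 0.25 is the framework's fixed cadence-balance exponent; the
    # other two values here are placeholders, not the published ones.
    delta_cadence: float = 0.25   # cadence-balance exponent Δ
    delta_induced: float = 1.0    # induced response exponent δ (placeholder)
    accel_scale: float = 1.0      # universal acceleration scale (placeholder)

CANON = CanonConstants()

# Hypothetical regime labels standing in for the nine environments.
REGIMES = [f"regime_{i}" for i in range(1, 10)]

def run_regime(name, constants=CANON):
    """Every regime analysis receives the identical constants;
    there is no per-regime fitting or adjustment."""
    return (constants.delta_cadence, constants.delta_induced, constants.accel_scale)

results = {name: run_regime(name) for name in REGIMES}
# All nine regimes see exactly the same parameter values.
assert len(set(results.values())) == 1
```

The `frozen=True` flag makes the commitment binding in the software sense: any attempt to reassign a field of `CANON` raises `dataclasses.FrozenInstanceError`, so a per-regime override cannot slip in silently.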
LFIS–22: Completion Without Remainder
LFIS–22 records an accounting rule that had been used implicitly throughout the framework but not previously stated in one place: representational completion leaves no remainder.
Once a representational obligation is fulfilled, it does not persist, accumulate, or require global bookkeeping. There are no correction terms, deferred balances, or hidden ledgers carried forward across regimes.
This rule closes the accounting structure of the framework itself.
Why separate infrastructure?
The infrastructure volumes are intentionally short, declarative, and non-explanatory. They are not papers in the usual sense. Their role is to lock meaning, so that empirical results cannot later be reframed by shifting definitions or assumptions.
Explanations, motivations, and applications remain in the Light Frame Papers and standalone technical notes. The infrastructure simply states what the framework commits to.
Where this leaves things
At this point:
the test suite is public,
the exponent definitions are fixed,
their cross-regime use is explicit, and
the accounting rules are closed.
Anyone interested can now evaluate the results without having to guess which assumptions were in play, or whether they change between regimes.

