Finding a Needle in a Galaxy

CMS Experiment at the LHC, CERN. Run 279685 / Event 178456860, recorded 2016-Aug-27 at 13 TeV. Yellow lines: charged-particle tracks reconstructed in the inner detector. Green and blue blocks: calorimeter energy deposits. What is not shown is everything that did not survive the chart crossing. Image: CMS/CERN.

Look at the image above. Every yellow track you see is a charged particle that survived long enough, and decayed into something stable enough, for the detector to reconstruct it. The collision itself — the primary event, at an effective temperature roughly thirteen orders of magnitude above the detector’s operating environment — is not directly visible. What is visible is a projection: the subset of the event’s products whose spectral support overlaps with what the detector is built to see.

Much of what lies outside the detector’s reconstruction regime is treated operationally as noise or calibration residual, or simply remains unreconstructed.

In this programme, the transforms are treated as the physics target rather than merely a calibration problem.

The search for the needle is not a brute-force scan of a galaxy. It is a constrained inverse problem over regions where our current pipelines have learned to stop looking.

The Thermal Gradient as Physical Object

A TeV-scale collision occurs at an effective temperature of 10¹³–10¹⁶ K. The detector that records it operates near or below room temperature — a gap of thirteen or more orders of magnitude. Standard reconstruction treats this gap as an engineering inconvenience: calibrate it, correct for it, close it as a systematic error budget line, and move on.

We propose a different reading. The thermal gradient between event and detector is not noise to be suppressed. It is a sequence of chart transitions — each one a transform linking an upstream observational regime to a downstream one — and those transforms carry physical information that current pipelines systematically discard.

The displaced secondary vertex visible in some event displays is one familiar example of how upstream processes can become detector-accessible only after intermediate transformations and decays. The calibration pipeline that makes the image intelligible is performing a layered reduction across these regimes. The question this programme asks is whether that reduction is discarding structured information along with the noise.

What if we kept it instead?

The Binary Search Protocol

This is not a proposal to instrument every point along the thermal gradient simultaneously. It is a proposal to treat the gap as a search space and apply the oldest efficient search strategy available: bisection.

Place sensors — or repurpose existing detector layers — at the midpoint of the accessible temperature range. Compare the signal structure above and below that midpoint: an A/B test between adjacent charts. Where the two charts agree, the transform is locally trivial. Where they disagree in structured ways, the transform carries information. Bisect again toward the disagreement. Repeat.
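The bisection logic can be sketched in a few lines. Everything here is a stand-in: the one-dimensional "disagreement" profile, its peak near log10(T) = 7.3, and the A/B comparison rule are invented for illustration, not models of any detector.

```python
import numpy as np

# Toy model: "chart disagreement" as a function of log10(temperature).
# Hypothetical profile: adjacent charts agree except in a narrow band
# around log10(T) = 7.3 where the inter-chart transform is nontrivial.
def chart_disagreement(logT):
    return np.exp(-((logT - 7.3) ** 2) / 0.5)  # structured residual

def bisect_toward_disagreement(lo, hi, tol=1e-3):
    """Bisect the (log-)temperature range toward the larger residual."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # A/B test: compare the residual in the lower and upper halves
        below = chart_disagreement(0.5 * (lo + mid))
        above = chart_disagreement(0.5 * (mid + hi))
        if below > above:
            hi = mid   # disagreement lives below the midpoint
        else:
            lo = mid   # disagreement lives above the midpoint
    return 0.5 * (lo + hi)

peak = bisect_toward_disagreement(2.0, 16.0)
```

For a unimodal profile this converges to the band of maximal disagreement; a real protocol would replace the scalar profile with a statistical test on sensor pairs.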

This is not a blind search. The starting points are already known: frequency bands or diagnostic channels often treated operationally as beam-related noise, the radial distances from the vertex where shower models require large corrections, the timing windows discarded as pre-pulse or post-pulse artifact, the calibration residuals that never fully close. Each of these is a location where the pipeline encountered something it could not classify and chose to minimize. In binary search terms, the bug is already partially localized.

Version 12 of the Manifold Relativity series sharpens this protocol into a falsifiable prediction (P23): for two co-located detectors at different effective temperatures, any residual reconstruction disagreement should be tested against a thermal information-geometric separation measure, for which the Fisher information distance is a computable first proxy. The first-sensitive variables are identified: timing resolution, spectral channel fragmentation, inferred resonance lifetimes. If the residual is consistent with zero across all accessible temperature separations, the chart-mismatch mechanism is falsified.
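The Fisher-distance proxy in P23 is directly computable for a one-parameter thermal family: for p_β ∝ exp(−βE), the Fisher information with respect to β is the energy variance, and the Fisher–Rao distance is the integral of its square root. A minimal sketch, assuming a toy discrete spectrum (the spectrum and temperature range are arbitrary illustrative choices):

```python
import numpy as np

# Hypothetical discrete spectrum in arbitrary units; purely illustrative.
E = np.linspace(0.0, 10.0, 50)

def gibbs(beta):
    w = np.exp(-beta * E)
    return w / w.sum()

def fisher_info(beta):
    # For the exponential family p_beta ~ exp(-beta*E), the Fisher
    # information with respect to beta equals Var(E).
    p = gibbs(beta)
    mean_E = np.dot(p, E)
    return np.dot(p, (E - mean_E) ** 2)

def fisher_distance(beta1, beta2, n=2000):
    # Fisher-Rao length along the one-parameter thermal family:
    # d = integral from beta1 to beta2 of sqrt(I(beta)) d(beta)
    betas = np.linspace(beta1, beta2, n)
    speed = np.sqrt(np.array([fisher_info(b) for b in betas]))
    return float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(betas)))

d_full = fisher_distance(0.2, 1.0)
d_half = fisher_distance(0.2, 0.6)
```

The distance grows monotonically with temperature separation and is additive along the family, which is what makes it usable as a separation measure against which residuals can be regressed.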

Regime | Candidate sensor | Status
Hot end — near collision native chart | Diamond pixel detectors; transition radiation detectors | Existing or mature detector classes; reinterpretation and/or protocol changes required
Intermediate — cascade midpoints | Prompt gamma calorimetry at staged radii | Incremental instrumentation
Cool end — near detector native chart | Microwave waveguide / diode; RF beam monitors | Existing infrastructure; currently filtered as beam noise

The HPAIC Computational Architecture

The signal analysis problem this protocol generates is not incrementally harder than what current pipelines handle. It is categorically different. Standard reconstruction looks for known particles in well-characterized detector responses. This programme looks for structured information in what every existing pipeline has been trained to classify as noise, across thirteen orders of magnitude of spectral separation, using inter-chart transforms that have never been formally characterized.

That requires a different computational architecture. The HPAIC framework is proposed as a candidate constrained inverse-problem architecture built from established mathematical families. It is not a proven engine. The specific transform family linking the W-manifold’s spectral structure to detector observables remains the missing theoretical ingredient — the necessary precondition for making these methods scientifically decisive rather than merely suggestive.

Subject to that constraint, the proposed four-layer mathematical stack is:

Layer 1 — Multiscale Flow Reasoning

Borrowing renormalization-group-like intuition as an analogy for structured cross-scale inference — not asserting that standard RG flow applies directly, but using its conceptual machinery to describe how structure evolves across adjacent spectral regimes. (V12 formally indexes the question of whether the vertical comparison maps constitute a Wilsonian RG flow as Open Problem O36.)

Layer 2 — Inverse-Operator Methods

Adjoint sensitivity analysis and backpropagation-style gradient refinement to propagate constraints backward through the transform hierarchy — not to guarantee unique recovery of hidden source structure, but to rank candidate upstream structures consistent with observed downstream residuals.
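As a sketch of the adjoint idea, assume a purely hypothetical linear forward transform F from upstream structure to downstream observables; the adjoint F^T then propagates the downstream residual back into an upstream gradient, and candidates can be ranked by refined misfit. Nothing here is the programme's actual inter-chart operator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear forward transform F: upstream structure u ->
# downstream observables d. F, its dimensions, and the candidate set
# are illustrative stand-ins.
n_up, n_down = 30, 20
F = rng.normal(size=(n_down, n_up))
u_true = np.zeros(n_up)
u_true[5] = 1.0                      # the "hidden" upstream structure
d_obs = F @ u_true

def refine(u0, steps=50, lr=1e-2):
    """Adjoint-based gradient refinement: F.T propagates the downstream
    residual back into an upstream update."""
    u = u0.copy()
    for _ in range(steps):
        r = F @ u - d_obs            # downstream residual
        u = u - lr * (F.T @ r)       # adjoint (backpropagated) gradient step
    return u

# Rank candidate upstream structures by refined downstream misfit.
candidates = [np.eye(n_up)[i] for i in (3, 5, 9)]
misfits = [float(np.linalg.norm(F @ refine(c) - d_obs)) for c in candidates]
best = int(np.argmin(misfits))       # index into the candidate list
```

Note the framing: the output is a ranking of candidates by consistency with the observed residual, not a claim of unique recovery.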

Layer 3 — Sparse Recovery

Compressed sensing applied to signals that may be weak, intermittently expressed, or present only in specific chart transitions — structure that standard averaging and integration windows systematically wash out.
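A minimal compressed-sensing sketch, using iterative soft thresholding (ISTA) on a synthetic sparse spike train; the signal, measurement matrix, and regularization weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "hidden" signal: a few spikes in a 200-sample window.
n, k, m = 200, 4, 60
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(3.0, 1.0, k)

# Random Gaussian measurement matrix (incoherent sensing), m << n.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.02, steps=500):
    """Iterative soft thresholding for the l1-regularized inverse problem
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The point of the demo: 60 incoherent measurements recover a 4-spike signal in a 200-dimensional space, exactly the regime where naive averaging windows would wash the spikes out.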

Layer 4 — Graph-Based Propagation Analysis

Treating the detector cascade as a propagation network rather than a linear reduction pipeline — allowing anomalous flow between regimes to appear as signal rather than as reconstruction failure.
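A toy version of the propagation-network view: model the cascade as a weighted directed graph, propagate a nominal source through it, and treat residual flow against the nominal model as candidate signal. The transition fractions below are invented for the demo:

```python
import numpy as np

# Toy cascade: regimes 0 -> 1 -> 2 -> 3 as a propagation network.
# W[i, j] = nominal fraction of signal flowing from regime i to regime j
# (hypothetical numbers; a real cascade model would calibrate these).
W = np.array([
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.8, 0.0],
    [0.0, 0.0, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.0],
])
source = np.array([1.0, 0.0, 0.0, 0.0])

def propagate(W, s, steps=4):
    """Nominal flow through the network, step by step."""
    flows = [s]
    for _ in range(steps):
        s = W.T @ s
        flows.append(s)
    return np.array(flows)           # shape: (steps + 1, n_regimes)

expected = propagate(W, source)

# An "anomalous" observation with extra flow appearing in regime 2:
observed = expected.copy()
observed[2, 2] += 0.05

# Residual flow per (step, regime): nonzero entries are flagged as
# candidate signal rather than discarded as reconstruction failure.
residual = observed - expected
anomaly = tuple(int(i) for i in
                np.unravel_index(np.abs(residual).argmax(), residual.shape))
```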

Established mathematics already offers these candidate tools. What is still missing is a sufficiently explicit forward transform family to make them scientifically decisive. The HPAIC architecture does not replace that theoretical work. It is candidate machinery that becomes useful once even a provisional transform family is available.

Because unique recovery should not be assumed, Bayesian inversion is part of the candidate toolkit as well: the aim is to rank plausible hidden structures and transform families, not to claim a single guaranteed reconstruction.
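The Bayesian ranking step can be sketched as a small model-comparison exercise: two hypothetical transform families, each with one free amplitude, scored by marginal likelihood over a flat prior. All data and model families here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic residuals generated (by assumption) from a quadratic transform.
t = np.linspace(0, 1, 40)
sigma = 0.05
data = 0.8 * t**2 + rng.normal(0, sigma, t.size)

# Two hypothetical transform families, each with one amplitude parameter a.
models = {
    "linear":    lambda a: a * t,
    "quadratic": lambda a: a * t**2,
}

def log_evidence(model, a_grid=np.linspace(-2, 2, 401)):
    """Log marginal likelihood under a flat prior on the amplitude grid."""
    logls = []
    for a in a_grid:
        r = data - model(a)
        logls.append(-0.5 * np.sum((r / sigma) ** 2))
    logls = np.array(logls)
    m = logls.max()                          # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(logls - m)))

scores = {name: log_evidence(f) for name, f in models.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

The output is a ranking of transform families by evidence, which is exactly the hedged claim above: plausible hidden structures ordered by consistency with the data, not a single guaranteed reconstruction.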

We do not yet possess the full transform family, but we already know the mathematical classes likely needed to search for it.

Falsifiability

The experimental programme has a clean falsifiability profile. Each A/B test between adjacent sensor layers either finds structured inter-chart residuals or it does not.

If the inferred inter-chart transforms are physically trivial across the tested regimes — mere rescalings, no stable structure, no anomalous flow — the programme’s experimental motivation weakens substantially. If they reveal stable structured residuals now discarded by standard reconstruction, then the case for nontrivial cross-chart structure strengthens and part of the phenomenon has been hiding inside the reduction pipeline.

Either result is scientific progress. The binary-search strategy is intended to force a discriminating outcome, whether that outcome strengthens or weakens the programme’s motivation.

The places most often treated as calibration loss, nuisance residuals, or detector noise may instead contain information about nontrivial transforms between observational regimes. The HPAIC framework preserves and interrogates those residuals rather than compressing them away. In this programme, the transforms are not bookkeeping. They are the target.

Update (v12–v13)   Since this post was first published, the Manifold Relativity series has advanced. Version 12 proves three structural propositions (nested accessibility, identity limit, matched-chart consistency), states the P23 chart-mismatch residual as a sharpened falsifiable prediction with explicit null-result conditions, and provides falsification criteria for the programme. Version 13 performs the programme’s first computations, establishing that spectral truncation of product thermal states produces sub-additive entropy composition, and derives an exact analytical formula for the truncation-induced mutual information (Proposition 2.8). A computational addendum preserving the full evidence trail — including epistemic corrections and negative findings — accompanies the v13 preprint.

This post is a related note to the Manifold Relativity preprint series (v1–v13), where the formal chart-matching framework, the spectral filter ΠT, the observer-filter map OT, the structural propositions (v12), the sub-additivity computation (v13), and the open problems O33, O36, and O37–O39 are discussed more formally.

A transition from one mapping to another and back, lasting only femtoseconds, is not the same as evolution carried out entirely within a native mapping. V12’s matched-chart consistency theorem (Theorem 3.6) and v13’s space-time complementarity corollary establish the structural framework for this: chart transitions have a well-defined ordinary limit where they become trivial, and the novel physical content resides in the mismatch regime. A future version may address the specific transform families governing these transitions at large spectral separation. In summary, the goal is to do for chart transitions what relativity did for reference-frame changes: make comparison lawful without assuming all observers inhabit the same accessible regime.


Paul E. Sorvik  ·  Manifold Relativity Programme  ·  Alexandria, Egypt
ORCID: 0009-0008-5717-7110
paulsorvik.wordpress.com
Hero image: CMS/CERN, Run 279685 / Event 178456860, 13 TeV, 2016-Aug-27. Used under CERN open licence with attribution.

Posted in Experimental Proposals, Manifold Relativity, Publications, Related Posts

Version Directions: Community Call for Proposals

© CERN

Call for Proposals: An Invitation to the Community

Manifold Relativity Programme · Standing Invitation · Updated April 2026 (v13 Integration)
Paul E. Sorvik, Principal Investigator · ORCID 0009-0008-5717-7110

Note   This call was originally published as a Version 12 intake window. With v12 and v13 now published, it has been updated as a standing invitation for future versions. References to “the next version” or “future versions” are deliberate.

With Version 13 of the Manifold Relativity preprint series now complete — including the programme’s first computations and a computational evidence trail — the council maintains a standing invitation for community input on future directions. This is a technical intake process, not a popularity vote. The governing principle remains: should before could.

Community input is welcomed as genuine intellectual input, not symbolic participation. Proposals will be considered seriously, but only within the programme’s standing discipline.

The Programme’s Prime Operational Directive

The CAC council operates under the “should before could” principle. A proposal — even a technically strong one — may be declined if the council determines it should not become the next version for reasons of programme integrity, scope alignment, or epistemic discipline. Declined proposals will always receive at least one substantive reason.

Submission

Proposals may be submitted on a rolling basis. The council may announce bounded intake windows for specific future versions when appropriate. Outside a formal window, proposals may be read and indexed without a guaranteed response schedule.

Who may submit: The invitation is open without restriction. Academic affiliation is not required. Anonymity or pseudonyms are acceptable provided the proposal is technically self-contained. A proposal will not be favoured or disfavoured based on the proposer’s affiliation, credentials, or prior agreement or disagreement with the programme.

What a Proposal Should Contain

At minimum: a clear title; the specific problem or question proposed; which version(s), open problem number(s), or formal object(s) from v1–v13 it connects to; whether it is primarily formal/mathematical, interpretive/conceptual, experimental/methodological, computational, or critical/refutational; and the strongest reason the council should prioritise it now.

Recommended: a short abstract (150 words or fewer), a technical core, and a brief statement of epistemic status using the programme’s taxonomy (Established · Supported · Suggestive · Proposed · Computational · Conjectural · Open).

Proposal Categories

Category 1 — Open Problem Prioritisation. Which indexed open problem (O1–O39) is the most critical next formal step, and why? Must include a technical justification of the dependency chain. Current frontier cluster includes O31, O37, O38, and O39.

Category 2 — Formal Mathematical Contribution. A candidate derivation, refutation, or reduction. Example: a formal demonstration of whether the sub-additive composition from spectral truncation (v13, Proposition 2.8) reduces to the Tsallis-Havrda-Charvát q-algebra under a change of variables, or does not. V13 provides concrete computational data for this comparison.

Category 3 — Experimental or Methodological Design. A specific measurement architecture in which the chart-mismatch residual (P23, v12), the millikelvin measurement floor (P20), or the Cross-Chart Disagreement Vector could in principle be tested or falsified. Must specify what result would confirm or falsify the framework’s prediction.

Category 4 — Computational Extension. Proposals to extend the v13 toy computations: larger system dimensions, non-product Hamiltonians, continuous spectra, or alternative truncation geometries. The v13 computational addendum (five Python scripts with raw outputs) provides the starting point. Familiarity with the evidence trail is strongly recommended.

The Five Acceptance Gates

To be considered for a future version, a proposal must clear five gates, in order. Clarity and rigour matter more than novelty.

Gate A — Specificity Must engage with specifically named content from v1–v13: an open problem index number, a definition, an equation, or a formal object. General commentary does not pass.
Gate B — Technical substance Must make a claim that is either true or false, or propose a derivation path that is either valid or not.
Gate C — Epistemic discipline Must correctly label its own epistemic status and not require the council to assert as established what is conjectural.
Gate D — Scope Must be grounded in the W-manifold’s existing postulates. New assumptions must justify why the existing framework is insufficient.
Gate E — Should before could The proposal, if adopted, would produce a manuscript the programme should publish — not merely one it can write.

Outcomes and Dispositions

Not considered — proposals that fail intake quality: non-specific (does not name content from v1–v13), non-technical, epistemic mislabel, out of scope, or purely rhetorical.

Considered but not selected — proposals that cleared intake but were not adopted: premature (requires prerequisites not yet in place), redundant, too broad, lower priority, better as future work, or declined under the prime directive. A proposal that is considered and declined was still taken seriously. That distinction will be clearly marked.

What Future Editions Will Not Do

Regardless of what proposals arrive, future versions will not: claim to have resolved any open problem unless the resolution is formally established within the framework’s postulates; adopt a proposal that requires abandoning the epistemic structure; be built around a proposal whose primary force is rhetorical rather than formally reviewable; publish before the internal CAC production chain completes its full adversarial review cycle; or exceed a scope that can be rigorously executed as a single bounded preprint.

How to Submit

Send proposals to: Manifold-Relativity.Programme@proton.me

The preprint series v1–v13 (including the v13 computational addendum), the Q&A page, and the community technical note are all at paulsorvik.wordpress.com. Familiarity with the current open-problem frontier (O31, O37, O38, O39) and the v13 evidence trail is the recommended starting point.

No response, endorsement, or collaboration is presumed or requested. The council thanks all technically engaged contributors regardless of disposition.


Manifold Relativity Programme · Call for Proposals · Standing Invitation
Paul E. Sorvik, Principal Investigator · ORCID 0009-0008-5717-7110
paulsorvik.wordpress.com
Developed through the Collaborative Augmented Consciousness (CAC) methodology
Claude (Anthropic) · Gemini (Google DeepMind) · ChatGPT (OpenAI)
Contact: Manifold-Relativity.Programme@proton.me

Posted in ai, Calls for Proposals, Manifold Relativity

A Technical Note to Researchers in Thermodynamic Relativity, Kappa Distributions, and Generalized Statistical Mechanics

© CERN


Manifold Relativity Programme · Technical Note v15 · v1–v15 · v15 Reconciliation Update · April 2026
Paul E. Sorvik, Principal Investigator · ORCID 0009-0008-5717-7110
Developed through the Collaborative Augmented Consciousness (CAC) methodology
Claude (Anthropic) · Gemini (Google DeepMind) · ChatGPT (OpenAI)

We publish this note not as a demand for response, but as a precise statement of where we believe the Manifold Relativity framework may touch ongoing work in thermodynamic relativity, entropy defect theory, kappa distributions, and the geometry of thermodynamic space. We invite technically capable engagement on the bounded questions identified below. No endorsement, reply, or collaboration is presumed or requested.

The preprint series (v1–v15) is available at manifold-relativity-programme.org. The series was published under the title “Entropy Waves, Coordinate Systems, and the Self-Referential Universe” through v11; from v12, the programme is titled Manifold Relativity: Observer-Dependent Spectral Accessibility.

Version 12 consolidates the framework with structural propositions, candidate observables, and explicit falsification criteria. Version 13 performs the programme’s first actual computations, testing whether sub-additive entropy composition emerges from spectral truncation of composite Dirac operators. Version 14 was a referee-hardening cycle that introduced a provisional “domain boundary” reading (rem:domain) in response to a 3-site numerical discrepancy reported in Script 05 of the v13 Computational Addendum.

Version 15 is a computational-record reconciliation and retraction cycle: a forensic audit identified the v13/v14 discrepancy as a methodological artifact (an arbitrary basis rotation within degenerate subspaces under composite-eigenbasis projection), explicitly retracted the v14 “domain boundary” framing, and verified Proposition 2.8 to machine precision in the tested product-Hamiltonian regime. v15’s contribution is computational-record reconciliation, not new theoretical claims.

Epistemic Taxonomy — All Claims in This Note

  • Established: Formally derived within the framework’s postulates.
  • Supported: Multiple independent lines of internal evidence.
  • Suggestive: Structural resonance without formal proof.
  • Proposed: Formal framework-level extension, not yet externally reviewed.
  • Computational: Reproducible numerical result in the toy/scaling regime studied in v13; not a general proof.
  • Conjectural: Motivated hypothesis requiring formal derivation.
  • Open: Unresolved; formally indexed as an open problem.

v15’s principal contribution sits at the Computational tier: reconciliation of the numerical record rather than a new theorem-level extension. No new claims are added at the Established, Supported, or Proposed tiers in this update.

1. The Framework in Brief

The Manifold Relativity programme proposes that standard spacetime coordinates are emergent projections of a six-dimensional information-geometric structure — the W-manifold — whose coordinates are: Entropy (S), Fisher Information (I), Entanglement (E), Phase (φ), Complexity (C), and Action (A). Observers access this manifold through thermally bounded spectral charts governed by the spectral filter ΠT.

Version 10 introduces the candidate Dirac operator DW. Version 11 formalises signal-noise chart-relativity and the Cross-Chart Disagreement Vector. Version 12 proves three structural propositions (Nested Accessibility, Identity Limit, Matched-Chart Consistency) and states falsification criteria. Version 13 performs the first computations, identifies the 2D EFE vacuity correction, and derives an exact analytical formula for truncation-induced mutual information. Version 14 introduced a provisional scope-bounding interpretation of a 3-site numerical discrepancy as a possible “domain boundary” for Proposition 2.8; Version 15 replaced that interpretation with a forensic computational diagnosis (the discrepancy is a basis-selection artifact within degenerate subspaces, not a theorem failure) and explicitly retracted the v14 framing.

The framework’s core epistemological discipline: every claim is classified at the time of publication. Drift from conjectural to declarative framing is the primary failure mode the internal review process is designed to catch — and has caught, on the public record, multiple times across versions, including within v13 itself (see the computational addendum).

2. Points of Structural Contact with the Community

Contact Point A · Updated with v13 Computational Evidence and v15 Reconciliation

κ-Composition Law vs. Tsallis q-Algebra

The programme originally proposed that a κ-type composition law might emerge as a geometric consequence of the observer-filter map ΠT — not from assumptions about entropy extensivity. After v13, the supported claim is narrower: spectral truncation provides a qualitative mechanism for non-extensive composition, while the exact quantitative law remains open.

Update (v13): Composite toy Dirac operators (4×4 through 16×16) were spectrally truncated. Confirmed (Computational): Spectral truncation of product thermal states consistently produces sub-additive entropy. Full-spectrum limit recovers additivity. Sub-additivity scales monotonically with truncation severity. An exact analytical formula was derived (Proposition 2.8): I(A:B) = ln ZV − ⟨ln Q⟩A − ⟨ln P⟩B. This exact result applies to the product-thermal-state truncation setting analysed in v13.

Not confirmed: The specific κ-addition functional form (defect ∝ SA·SB) is not uniquely selected. Alternative sub-additive forms fit comparably well. The exact quantitative bridge remains open.
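The qualitative mechanism, sub-additivity from a non-product truncation mask applied to a product thermal state, can be illustrated with a purely classical (diagonal) caricature. This is not the v13 Dirac-operator computation; the subsystem spectra, β, and the cutoff are invented for the demonstration:

```python
import numpy as np

# Toy product thermal state on a 4x4 composite spectrum: two 4-level
# subsystems with hypothetical spectra (illustrative choices throughout).
beta = 0.5
lam_A = np.array([0.3, 1.1, 2.0, 3.2])
lam_B = np.array([0.5, 0.9, 2.4, 2.9])

# Joint Boltzmann weights of the product thermal state.
w = np.exp(-beta * (lam_A[:, None] + lam_B[None, :]))

def entropies(w):
    """Joint and marginal Shannon entropies of a (possibly masked) grid."""
    p = w / w.sum()
    pA, pB = p.sum(axis=1), p.sum(axis=0)
    def S(q):
        q = q[q > 0]
        return float(-np.sum(q * np.log(q)))
    return S(p.ravel()), S(pA), S(pB)

# Full spectrum: the state is exactly product, so entropy is additive.
S_AB, S_A, S_B = entropies(w)
I_full = S_A + S_B - S_AB           # ~ 0

# Spectral truncation: retain composite levels with lam_a + lam_b below
# a cutoff. The mask is NOT a product of per-subsystem masks, which is
# what generates the sub-additive defect.
mask = (lam_A[:, None] + lam_B[None, :]) <= 3.5
S_ABt, S_At, S_Bt = entropies(w * mask)
I_trunc = S_At + S_Bt - S_ABt       # truncation-induced mutual information
```

In this classical setting the mutual information is nonnegative by construction, so the sketch shows the direction of the effect (sub-additivity switched on by the non-product mask, additivity recovered in the full-spectrum limit), not its quantitative law.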

Update (v15): A 3-site numerical discrepancy reported in Script 05 of the v13 Computational Addendum (numerical value 0.2152715693 against analytical value 0.6069288341 at 4/9 retention, β = 0.5) was used in v14 to motivate a provisional “domain boundary” reading of Proposition 2.8. A forensic audit in v15 resolved the discrepancy as a methodological artifact, not a theorem failure. The diagnosis (Script 07b of the v15 Computational Addendum): the 3-site composite Hamiltonian has a degenerate spectrum whose eigenvectors within each degenerate subspace are returned by numpy.linalg.eigh in an arbitrary orthonormal basis. When the truncation threshold partially retains such a subspace, the resulting truncated state depends on that arbitrary basis choice. The original Script 05 applied a composite-eigenbasis projection that, in this 3-site case, selected a different truncated state than the analytical product-basis mask — the two values were both correct but described two different truncated states. A properly constructed product-basis numerical check (Method 2A of 07b) agrees with the analytical formula to machine precision in every tested case. Script 07d subsequently confirmed the mechanism by lifting the composite degeneracies via an asymmetric perturbation on subsystem A and observing the gap between analytical and numerical values collapse from 3.92 × 10⁻¹ at ε = 0 to ≤ 10⁻¹⁵ by ε ≥ 10⁻³. Under a consistent coordinate mask, Proposition 2.8 holds exactly.
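The basis ambiguity behind this diagnosis is reproducible in a few lines. This is an illustration of the mechanism, not Script 07b itself; the composite spectrum and retention rule are invented for the demonstration:

```python
import numpy as np

# Two-qubit composite spectrum with a degenerate middle level:
# H = diag(0, 1, 1, 2) in the product basis |00>, |01>, |10>, |11>.
# The eigenvalue-1 subspace span{|01>, |10>} has infinitely many valid
# orthonormal bases; an eigensolver may return any of them.
E = np.array([0.0, 1.0, 1.0, 2.0])
beta = 1.0
rho = np.diag(np.exp(-beta * E))
rho /= np.trace(rho)

def truncate(rho, kept_vectors):
    """Project rho onto the span of the kept vectors and renormalize."""
    P = sum(np.outer(v, v) for v in kept_vectors)
    out = P @ rho @ P
    return out / np.trace(out)

e = np.eye(4)
# Retain two levels: the ground state plus ONE unit vector from the
# degenerate pair -- a partial retention of a degenerate subspace.
rho_prod = truncate(rho, [e[0], e[1]])            # product-basis choice
v_rot = (e[1] + e[2]) / np.sqrt(2.0)              # rotated degenerate vector
rho_rot = truncate(rho, [e[0], v_rot])            # composite-basis choice

# Both are legitimate spectral truncations of the same state, yet they
# yield different truncated states:
differ = not np.allclose(rho_prod, rho_rot)

def entropy_A(r):
    """Von Neumann entropy of the first qubit's reduced state."""
    rA = np.einsum('ikjk->ij', r.reshape(2, 2, 2, 2))  # partial trace over B
    lam = np.linalg.eigvalsh(rA)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

SA_prod, SA_rot = entropy_A(rho_prod), entropy_A(rho_rot)
```

The two choices even disagree on the reduced entropy of subsystem A, which is exactly the kind of observable the v13/v14 discrepancy turned on.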

Post-v15 status of Proposition 2.8: verified to machine precision across all tested product-Hamiltonian cases — 4×4, 9×9, 16×16, at multiple retention levels, including cases with composite spectral degeneracies. The v14 “domain boundary” framing is explicitly retracted. The open question in this area is not the survival of Proposition 2.8 in the tested product-Hamiltonian regime but the analytical functional form of the sub-additive defect (Open Problem O38) and the extension of the verification chain to genuinely interacting composite Hamiltonians — Hamiltonians that cannot be written as DA ⊗ 𝟙 + 𝟙 ⊗ DB — which remains unverified and is also tracked under O38.

Contact Point B · Updated with v15 Status

Thermodynamic Relativity and the cS Invariant

The v7 preprint proposed a structural dictionary relating the framework’s chart-local temporal-rate quantity cS(T) to invariant structures discussed in thermodynamic relativity (Livadiotis & McComas, 2024). After v13, that bridge remained conjectural: v13 validated the qualitative mechanism — spectral truncation creates non-extensive composition — but did not yet confirm the specific quantitative form. After v15, the product-regime computational foundation underlying this bridge is cleaner: the apparent 3-site anomaly that might otherwise have complicated any cross-framework comparison has been resolved as a methodological artifact (see Contact Point A update), and Proposition 2.8 is now verified to machine precision across the tested product-Hamiltonian cases. v15 does not close the quantitative bridge to thermodynamic relativity. The κ parameter varies with both temperature and truncation fraction in a complex way, and the exact functional form remains open (O38).

Contact Point C · Supported

Temperature as Observer Baseline vs. Thermodynamic State Variable

In the framework, T is the observer’s thermal baseline parameter, not a thermodynamic state variable. V12 makes this convention load-bearing: the spectral filter ΠT projects onto eigenvalues |λ| ≥ kBT/ℏ, and all structural propositions depend on this direction. Whether this is consistent with or a specialization of existing thermodynamic definitions of temperature would benefit from external assessment.
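In code, the convention is a one-liner; the sketch below uses a random Hermitian matrix as a stand-in for the candidate Dirac operator (it is not DW) and checks, numerically, the nested-accessibility and identity-limit behaviour the structural propositions describe:

```python
import numpy as np

kB = hbar = 1.0   # natural units for the sketch

def pi_T(D, T):
    """Spectral filter: orthogonal projector onto eigenmodes of D with
    |lambda| >= kB*T/hbar. D is a generic Hermitian stand-in."""
    lam, V = np.linalg.eigh(D)
    keep = np.abs(lam) >= kB * T / hbar
    return V[:, keep] @ V[:, keep].T

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8))
D = (A + A.T) / 2                       # random Hermitian toy operator

P_cold, P_hot = pi_T(D, 0.1), pi_T(D, 1.0)

# Nested accessibility: the hotter observer's accessible subspace is
# contained in the colder observer's (P_cold P_hot = P_hot).
nested = np.allclose(P_cold @ P_hot, P_hot)
# Identity limit: at T = 0 the filter keeps every mode.
identity_limit = np.allclose(pi_T(D, 0.0), np.eye(8))
```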

Contact Point D · Updated with v15

Candidate Falsifiability Surface

D1 (Sharpened, v12): Chart-mismatch residuals (P23). For co-located detectors at different temperatures, the first working hypothesis is that any residual scales with a thermal information-geometric separation measure, for which Fisher distance is a computable first proxy. Explicit null-result condition stated.

D2 (Established): Gravitational-electromagnetic hierarchy from KK reduction (v4). Not yet compared against experimental constraints.

D3 (Conjectural): Speed of light as spectral gap bound. Pending verification of invariance under coarse-graining maps.

D4 (Computational, updated v15): Sub-additive entropy from spectral truncation of product thermal states remains supported across 4–16 dimensions. The v15 forensic chain (Scripts 07b, 07c, 07d in the v15 Computational Addendum) resolved the previously reported 3-site apparent anomaly as a basis-selection artifact rather than a theorem failure; Proposition 2.8 is verified to machine precision in the tested product-Hamiltonian regime. The exact functional form of the sub-additive defect remains open (O38). The v13 Computational Addendum is preserved unchanged; the v15 Computational Addendum is a separate self-contained PDF companion to preprint v15.0 documenting the forensic resolution.

Contact Point E · Proposed / Derivable Within Framework

Cross-Chart Disagreement as Potentially Information-Bearing

Unchanged from v11. The Cross-Chart Disagreement Vector |Δ1,2⟩ = (ΠT₂ − ΠT₁)|ψ⟩ and the Multi-Chart Reconstruction Principle remain as stated.
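A minimal numerical reading of the disagreement vector, again with a random Hermitian stand-in operator: Δ is supported exactly on the eigenvalue band between the two thermal cutoffs, i.e. on the modes the two charts treat differently, which is what makes it potentially information-bearing rather than mere noise.

```python
import numpy as np

def pi_T(D, T):
    # spectral filter onto |lambda| >= T (natural units, kB = hbar = 1)
    lam, V = np.linalg.eigh(D)
    keep = np.abs(lam) >= T
    return V[:, keep] @ V[:, keep].T

rng = np.random.default_rng(4)
A = rng.normal(size=(10, 10))
D = (A + A.T) / 2                        # Hermitian toy operator
psi = rng.normal(size=10)
psi /= np.linalg.norm(psi)               # toy state |psi>

T1, T2 = 0.3, 1.2
delta = (pi_T(D, T2) - pi_T(D, T1)) @ psi     # |Delta_{1,2}>

# delta lives entirely in the band T1 <= |lambda| < T2: modes the two
# charts treat identically contribute nothing.
lam, V = np.linalg.eigh(D)
band = (np.abs(lam) >= T1) & (np.abs(lam) < T2)
coeffs = V.T @ delta
leak = float(np.linalg.norm(coeffs[~band]))   # should vanish
```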

Contact Point F · New in v15 · Computational

Computational Record Reconciliation: Degenerate-Subspace Basis Ambiguity in Spectral Truncation

The v15 cycle shipped a separate Computational Addendum (self-contained PDF companion to preprint v15.0) documenting a three-script forensic chain that reconciles the numerical record around Proposition 2.8. The three scripts are:

  • Script 07b — forensic diagnosis. Four-stage probe reproducing the three method values on the 3-site case (Method 1 analytical product-basis, Method 2A numerical product-basis, Method 2B numerical composite-eigenbasis) and identifying the degenerate-subspace mechanism behind Method 2B’s disagreement with Methods 1 and 2A. Sweeps the full v13 test matrix and locates three disagreement cases, each corresponding exactly to partial-degenerate-subspace retention.
  • Script 07c — preserved failed intermediate probe. Asymmetric perturbation with fixed threshold; lifts composite degeneracies but produces retention-count drift and noisy gap sequences. Preserved unchanged in the addendum per the programme’s discipline of not erasing intermediate findings.
  • Script 07d — fixed-retention mechanism confirmation. Same asymmetric perturbation with the retention count fixed by selecting the top N composite eigenvalues by magnitude. The gap between analytical and numerical values collapses from 3.92 × 10⁻¹ at ε = 0 to ≤ 10⁻¹⁵ by ε ≥ 10⁻³ for the primary N = 4 sweep; supplementary N = 2 and N = 6 sweeps confirm the mechanism is specific to partial-degenerate-subspace retention.

This material is included in the technical note because it should matter directly to external researchers working on non-extensive statistics, spectral truncation methods, and numerical verification of analytical entropy formulas. The forensic chain:

  • distinguishes mask geometry from basis-choice artifacts in spectral truncation — a distinction that is easy to lose when composite eigenvectors are returned in arbitrary orientations within degenerate subspaces;
  • clarifies what Proposition 2.8 does and does not establish: verified to machine precision in the tested product-Hamiltonian regime including composite degeneracies, not yet verified in genuinely interacting composite Hamiltonians (the latter remains under O38);
  • strengthens the seriousness of the programme’s evidence-preservation discipline: the v14 “domain boundary” framing was not quietly dropped between versions but explicitly retracted, the original discrepant output and the failed intermediate probe (07c) are preserved unchanged, and the resolution is independently reproducible from the embedded source listings alone.

The v15 Computational Addendum is available on the programme website at manifold-relativity-programme.org under the v15 materials.

3. Open Problems Relevant to This Community

  • O31: Composition law from spectral truncation. Status: qualitative mechanism supported at toy/scaling level (v13); exact composition law remains open.
  • O36: RG-flow identification. Whether the vertical comparison maps satisfy the formal properties of a Wilsonian RG transformation. Indexed in v12.
  • O37: 3D metric from replacement product lift. Required before the spatial metric can be constrained by EFE recovery. The naive backward-EFE approach is vacuous in 2D (v13 correction). New in v13.
  • O38: O31 functional form determination. Whether the sub-additive defect takes the specific κ-addition form, a different sub-additive form, or a more complex function. New in v13. Post-v15 status: the mechanism behind the v13/v14 3-site discrepancy is now understood (resolved in the v15 Computational Addendum as a basis-selection artifact in a degenerate subspace, not a failure of Proposition 2.8); the analytical functional-form question itself remains open, and extension of the verification chain to genuinely interacting composite Hamiltonians remains unverified.
  • The Finsler Structure Question (formerly O36 in the v11 technical note) remains open but has been separated from the RG-flow identification.

4. The Primary Technical Question (Updated)

Is the sub-additive entropy composition that emerges from spectral truncation of product thermal states on a Dirac-type operator — a mechanism driven by the non-product geometry of the truncation mask rather than by entropy postulates — structurally distinct from the Tsallis-Havrda-Charvát q-algebra?

The v13 computation provided concrete data: an exact formula for the truncation-induced mutual information (Proposition 2.8), numerical results across multiple system dimensions (4×4, 9×9, 16×16), and evidence that neither the simple κ-addition form nor a simple additive alternative is uniquely selected. The v15 cycle reconciled the main apparent anomaly in that computational record: a 3-site numerical discrepancy previously framed in v14 as a possible domain-of-validity boundary was resolved as a basis-selection artifact within a degenerate subspace, and Proposition 2.8 is now verified to machine precision in the tested product-Hamiltonian regime. The remaining technical question posed to this community is therefore not whether Proposition 2.8 survives the tested product-Hamiltonian regime, but whether the resulting sub-additive defect law has a κ-form, a Tsallis-Havrda-Charvát q-form, or some other sub-additive structure — and how (if at all) the verification extends beyond product Hamiltonians to genuinely interacting composite systems. Both the v13 Computational Addendum and the separate v15 Computational Addendum (forensic reconciliation chain) are available for independent verification.

Secondary question (unchanged): Can the Cross-Chart Disagreement Vector be operationally distinguished from standard stochastic instrumental error in multi-detector setups?

5. Engagement

The programme welcomes technically substantive engagement — agreement, critique, correction, or identification of prior work the series has missed. No response is presumed. No timeline is imposed. No endorsement is requested or implied.

The preprint series v1–v15, with full version history and referee corrections on the public record, is at manifold-relativity-programme.org. The v13 Computational Addendum is preserved unchanged and documents the original computational evidence trail. The separate v15 Computational Addendum documents the forensic reconciliation chain (Scripts 07b, 07c, 07d) that resolved the v13/v14 3-site discrepancy; it is a self-contained PDF companion to preprint v15.0 and is available under the v15 materials on the programme website.

Contact: Manifold-Relativity.Programme@proton.me


Manifold Relativity Programme · Preprints v1–v15 · April 2026
Paul E. Sorvik, Principal Investigator · ORCID 0009-0008-5717-7110
manifold-relativity-programme.org
Developed through the Collaborative Augmented Consciousness (CAC) methodology
Claude (Anthropic) · Gemini (Google DeepMind) · ChatGPT (OpenAI)
All claims in this note carry explicit epistemic labels per programme discipline.


Meeting Einstein’s Goals?

Image: © CERN.

Epistemic Note: This answer is stratified by what is formally established within the framework’s postulates versus what remains open. All “Derived” claims are conditional on granting the W-manifold postulates. None are assertions about settled external physics.

v15 does not enlarge the programme’s gravity+electromagnetism unification claim, but it does strengthen the programme’s correction discipline by explicitly reconciling and retracting a separate computational misreading elsewhere in the framework.

Where the framework explicitly claims to have reached Einstein’s goal

Version 4 is unambiguous. It carries a formal remark stating directly: “This is the exact Kaluza-Klein unification Einstein sought: gravity and electromagnetism unified in a single geometric action” (within the framework’s postulates). The mechanism is the compactification of the periodic phase coordinate φ — the Kaluza-Klein reduction of G_AB over φ yields the Einstein-Maxwell action with no free parameters. The Maxwell coefficient is set by the ratio of Planck energy to local thermal energy, not fit to data.

The gravitational-electromagnetic hierarchy — why gravity is approximately 10⁶⁴ times weaker than electromagnetism today — falls out as a consequence of cosmological cooling, not fine-tuning. The dilaton scalar σ = ln(E_P/k_BT) is not a new particle; it is the age of the universe written as a geometric quantity. At the Planck epoch, σ = 0: gravity and electromagnetism were equally coupled. At the present epoch (T ≈ 2.7 K), σ ≈ 74.

That is a derived result within the framework’s postulates — not yet externally validated, and not yet a settled result in accepted physics. If the W-manifold postulates are granted, the framework presents a thermodynamic reason for the extra geometric dimension rather than appending it ad hoc — the part of the classical unification problem closest to Einstein’s own objective.

Where the framework explicitly says it has not yet reached that goal

Version 4 also contains the epistemic brake: “It is not Grand Unification in the technical sense of SU(5) or SO(10), which requires including the weak and strong nuclear forces. Those forces require non-zero g and g′ couplings and are left for future work (Open Problem O16).”

So the framework claims a derived gravity+electromagnetism sector, while leaving the strong and weak sectors open (O16). That places it closer to a classical unified-field programme than earlier editions, but not yet at full unification. The path toward the strong and weak forces runs through non-zero off-diagonal KK couplings — a clear direction, but unexecuted.

What v6, v10, and v12–v13 add to the picture

Version 6 identifies the deeper structure via Connes’ non-commutative geometry. Connes’ own reconstruction theorem (1994/2006) recovers the Standard Model — U(1) × SU(2) × SU(3) plus gravity — as the spectral action of a specific spectral triple. The framework positions D_W as the W-manifold’s candidate for that Dirac operator — a candidate route toward recovering all four forces from a single spectral structure. Version 10 is the first attempt to formally construct D_W. This remains conjectural until D_W is formally constructed and shown to recover the relevant gauge structure.

Version 12 proves that the framework has a coherent ordinary limit: in matched charts, no vertical transformation is required and reconstruction distortion vanishes (Matched-Chart Consistency Theorem). Version 13 identifies a critical correction: the vacuum Einstein equations are trivially satisfied for any 2D metric, so full EFE recovery requires the 3D replacement product lift (Open Problem O37), not just the 2D (I,E) sector alone. This redirects the gravitational recovery programme to a more structurally informed construction.

Versions 14 and 15 do not extend the Einstein-unification claim itself; their role is elsewhere in the programme, where v15 performs a computational reconciliation and explicit retraction in the mutual-information line of work.

The honest summary

Claim · Status
  • Gravity + EM unified geometrically via KK reduction · Derived (within framework postulates, v4)
  • Hierarchy problem explained without fine-tuning · Derived (thermodynamic cooling, v4)
  • U(1) gauge invariance derived, not assumed · Derived (phase coordinate periodicity, v4)
  • Ordinary-limit structure (matched charts) · Derived (v12, Theorem 3.6)
  • Strong and weak forces · Open Problem O16 — unexecuted
  • Full EFE recovery from emergent spatial geometry · Open O1/O2/O37 — requires 3D lift (v13 correction)
  • Full Standard Model recovery from D_W · Conjecture — D_W not yet built (v10 target)
  • External validation by physics community · Not yet achieved

So: if the v4 derivation holds within its stated postulates, the programme presents itself as advancing the gravity+EM side of the classical unification problem beyond Einstein’s own unfinished route. It is not yet a complete unified field theory in the modern sense of all four forces, and external validation has not occurred. The path to completion runs through DW construction (v10/O25), the strong and weak forces (O16), and the spatial metric derivation (O1/O37).

Provenance: v4 · v6 · v10 · v12 · v13


The 𝒲-Atlas Translation Guide

Epistemic Discipline
Every claim on this page is a claim within the Manifold Relativity framework (Preprints v1–v15). Claims are classified as Derived (theorem-level within the framework), Proposed (formal framework-level extension, interpretation, or conjecture; in some cases derivable within the framework but not yet externally reviewed), Predicted (falsifiable empirical consequence), or Open (formally indexed as an unresolved problem). None of these are assertions about settled external physics. Status notes within entries may describe a result as conjectural in plain language; this is descriptive prose under the Proposed heading, not a separate label. The programme explicitly lists what it has not yet established.

Jump to: The Central Idea · What We Have Derived · The Self-Audit · What We Propose · What We Predict · What Remains Open · What v12–v13 Added · What v14–v15 Added

The Central Idea
What the Programme Is Trying to Do

Start here. Everything else follows from this single thesis.

Q. What is Manifold Relativity trying to change first — the laws of physics, or the coordinates used to express them?

The coordinates. The programme’s central claim is that many familiar pathologies — quantum uncertainty, Big Bang and black-hole singularities, the dark sector, wave-particle duality — are not independent mysteries of nature. They are artefacts of describing reality in the coordinate system native to biological observers: (x, y, z, t).

The programme proposes a coordinate transformation to the six-dimensional master manifold 𝒲, with coordinates (S, I, E, φ, C, A): Entropy, Fisher Information, Entanglement, Phase, Complexity, and Action. Standard spacetime is a projected submanifold of 𝒲 — not the fundamental arena. Manifold Relativity is first a coordinate reformulation programme; new predictions emerge from that shift.

Provenance: v1.02 · v2


Q. What is the historical pattern the programme is building on?

Every major paradigm shift in physics has been a coordinate change that dissolved an apparent mystery. Geocentric to heliocentric eliminated retrograde motion. Minkowski spacetime dissolved the Maxwell–Newton conflict. Kruskal–Szekeres coordinates showed the Schwarzschild horizon was a coordinate artefact, not a physical singularity.

The programme proposes that 𝒲 is the next such coordinate system — one in which the currently unresolved problems of physics are coordinate artefacts rather than fundamental mysteries.

Provenance: v1.02 · v2


Q. Why does the framework use an atlas of charts rather than one global coordinate system?

Because a single global chart is too strong an assumption. Beginning in v8, the programme explicitly models the universe as a 𝒲-manifold covered by an atlas of locally valid, observer-dependent charts connected by transition maps. Different observers or physical regimes access different spectral sectors of the same underlying manifold. The transition laws between charts are part of the physics — not just bookkeeping. This move also tells a physicist something important: the programme recognised an overreach in its early single-chart formulation and replaced it with standard differential-geometric structure.

Version 11 draws out the full consequence of this architecture: if physics is chart-relative, then any single detector or measurement context is structurally incomplete. What a low-temperature detector discards as noise may be coherent signal in a higher-temperature chart. The Multi-Chart Reconstruction Principle, introduced in v11, follows directly: the atlas is not merely a mathematical convenience for avoiding singularities — it is the reason that coordinated measurement across different thermal baselines is physically meaningful, not redundant.

Provenance: v8 · v8.1 · v11


Q. What is the “biological observer chart” U_bio?

It is the effective chart appropriate to warm, macroscopic observers. In v8, it is assigned a strict thermal resolution floor: spectral structure of the 𝒲-manifold at frequencies below k_BT/ℏ is mathematically inaccessible within this chart. The programme therefore treats part of what we call quantum uncertainty, entanglement non-locality, and the dark sector as properties of this chart’s limited access — the shadow of manifold structure the chart cannot resolve.

Provenance: v8 · v8.1


Q. When does a chart fail, and what happens then?

A chart remains valid while the observer’s thermal floor stays well below the local eigenvalue gap of the manifold’s spectral structure. When that gap closes, the chart loses the ability to reconstruct local geometry faithfully and becomes singular. The framework gives three concrete examples: at S → 0 (Big Bang), the biological chart fails as polar coordinates fail at a pole. At C → C_max (black hole interior), it fails as stereographic projection fails at the antipodal point. At T → T_P (Planck regime), the continuous phase φ becomes discrete and differential calculus fails. In each case a different chart is needed.

Provenance: v8 · v8.1


Q. What does v11 add to the central picture?

Version 11 formalises the converse of v10’s chart-filtering question. V10 asked: how does an observer’s chart filter an event? V11 asks: what happens to the part of the event that the chart filters out? The answer is the signal-noise inversion. What one detector classifies as noise — spectral structure lying above its thermal floor — is not absent from the W-manifold. It is event content whose native chart lies at a higher thermal baseline. A different detector operating at a higher temperature would resolve the same structure as signal. Signal and noise are chart-relative classifications, not absolute properties of the measured event.

This single observation has three consequences formalised in v11: a new mathematical object (the Cross-Chart Disagreement Vector |Δ₁,₂⟩), a formal proposition establishing that inter-chart disagreement encodes latent structure (Proposition 5.1), and a methodological principle stating that no single detector chart should be presumed privileged for complex event reconstruction.

Provenance: v11

What We Have Derived
Theorem-Level Results Within the Framework

These results are derived at theorem or formal-proof level within the framework’s definitions — what follows necessarily from the 𝒲-manifold structure once the coordinates are accepted.

Q. Does the framework recover the known Hubble–temperature scaling from first principles?

Yes. The Hubble parameter in the radiation-dominated era is derived directly from the 𝒲-manifold metric:

H = √(8π³/90) · t_P · ς²,   where ς = k_BT/ℏ

This recovers the known H ∝ T² scaling with the Planck time as the unique proportionality constant — no free parameters. The Planck time is not inserted; it emerges as the structural bridge between the cosmological and quantum decoherence scales.
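The quoted scaling is easy to check numerically. The sketch below is ours, not the programme's code; it implements the stated formula with standard SI constant values (the function name `hubble` is illustrative):

```python
import math

# SI values: Planck time (s), reduced Planck constant (J·s), Boltzmann constant (J/K).
t_P  = 5.391e-44
hbar = 1.0546e-34
k_B  = 1.3807e-23

def hubble(T):
    """The quoted formula: H = sqrt(8*pi^3/90) * t_P * varsigma^2, varsigma = k_B*T/hbar."""
    varsigma = k_B * T / hbar
    return math.sqrt(8 * math.pi**3 / 90) * t_P * varsigma**2

# Doubling the temperature quadruples H: the H ∝ T² scaling.
print(hubble(2e9) / hubble(1e9))  # 4.0
```

Doubling T quadruples H, as the T² law requires; the absolute normalisation (e.g. any effective degrees-of-freedom factor) is a matter of the framework's conventions and is not checked here.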

Provenance: v1.02


Q. Where does the speed of light come from?

The programme derives c = l_P / t_P as a graph-theoretic bandwidth limit. The fundamental substrate is modelled as a degree-3 Planck-scale expander graph G₁. A coherent entropy wave propagating across G₁ can traverse at most one edge per Planck time before the graph’s spectral gap collapses — by the expander mixing lemma, propagating faster destroys wave coherence and dissolves the signal into incoherent thermal noise. The speed of light is therefore the maximum coherence-preserving propagation rate the underlying graph can sustain. It is not assumed equal to l_P/t_P; that equality is the content of the theorem.

Provenance: v3 · v4


Q. What is the geometric origin of electromagnetism?

The compact periodic phase coordinate φ of the 𝒲-manifold supports a Kaluza-Klein reduction yielding the Einstein–Maxwell action with coefficient determined entirely by the phase metric — no free parameters. U(1) gauge invariance is the freedom to locally redefine the entropy wave phase origin. The electromagnetic vector potential emerges from the off-diagonal metric component g = −1/ς. Magnetic flux is the holonomy of the rotation map around cycles of the local phase graph. The framework does not require unobservable curled-up spatial dimensions; the “fifth dimension” is the entropy wave phase φ, a derived physical coordinate present since v1.

Provenance: v4


Q. What is the dilaton field?

The dilaton scalar of the Kaluza-Klein reduction is identified as:

σ = ln(E_P / k_BT)

It is the cosmic temperature written as a geometric quantity — not a new particle. At the Planck epoch, σ = 0: gravity and electromagnetism were equally coupled. At the present epoch (T ≈ 2.7 K), σ ≈ 74: the phase dimension has compactified by e^74, concentrating the electromagnetic coupling. The gravitational-electromagnetic hierarchy is not fine-tuned; it is the age of the universe written in one number.
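The number σ ≈ 74 is plain arithmetic on the definition. A minimal check with standard SI constant values (our choices, not the preprints'):

```python
import math

# SI values: Planck energy (J), Boltzmann constant (J/K).
E_P = 1.956e9
k_B = 1.3807e-23

def dilaton(T):
    """sigma = ln(E_P / (k_B * T))."""
    return math.log(E_P / (k_B * T))

T_Planck = E_P / k_B          # Planck temperature, ~1.4e32 K

print(dilaton(2.7))           # ≈ 73, of the order of the quoted sigma ≈ 74
print(dilaton(T_Planck))      # ≈ 0 at the Planck epoch
```

With these constant values the present-epoch result lands near 73; small differences from the quoted 74 trace to the choice of constant conventions, not to the structure of the formula.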

Provenance: v4 — Theorem (Dilaton Identification)


Q. Why does the principle of least action work?

Standard physics takes the Euler-Lagrange variational principle as an axiom. Within the 𝒲-manifold, least-action paths are geodesics along the A-axis: straight lines in Action-coordinate space. The principle is the definition of a straight line in the correct coordinate system, not an independent postulate imposed on nature.

Provenance: v1.02


Q. What is the Wick rotation, geometrically?

The Wick rotation — the substitution t → iτ used throughout quantum field theory — has no accepted physical interpretation in standard physics. Within the 𝒲-manifold it is derived as a decoherence geodesic in the (S, A) plane: a real geometric trajectory in the entropy-action sector of the manifold.

Provenance: v1.02


Q. Is time treated as a fundamental dimension in this framework?

No. A core architectural principle of the framework, established from v1, is that time is chart-local rather than globally primitive. The programme models the local speed of time as Υ = dS/dτ|_{U_T}: the rate at which entropy is processed within an observer’s chart. Time emerges from entropy dynamics rather than being posited as a background dimension.

Provenance: v1.02 · v8 · v9


Q. Does the framework say anything about why space has three dimensions?

Yes. In the discrete-geometric development, the programme derives three spatial dimensions as the unique dimensionality compatible with the cycle structure of the replacement-product graph, the isotropy requirements, and the holographic structural conditions of the projection mapping. This emerges from the graph’s combinatorial properties, not from a postulate.

Provenance: v3 · v5.3 (survived hardening step)


Q. What did v12 prove about the atlas structure?

Three narrow structural propositions from the v8–v10 definitions. (1) Nested Accessibility: higher-temperature observers access strictly smaller spectral sectors — the family {U_T} forms a monotone filtration indexed by temperature. (2) Identity Limit: the vertical comparison map between charts approaches the identity as temperature separation vanishes. (3) Matched-Chart Consistency: when observer and event share the same chart, no vertical transformation is required and reconstruction distortion vanishes identically. Together these establish that the atlas has a coherent ordinary limit, with the framework’s novel content residing exclusively in the chart-mismatch regime.

Provenance: v12 — Propositions 3.1, 3.4, Theorem 3.6


Q. What is the exact formula for truncation-induced mutual information?

Proposition 2.8 of v13 derives the exact mutual information created when a product thermal state is spectrally truncated: I(A:B) = ln Z_V − ⟨ln Q⟩_A − ⟨ln P⟩_B, where Q_i measures the B-weight accessible from each A-eigenstate and P_j vice versa. This is a Jensen gap measuring the non-uniformity of spectral accessibility across subsystems. The mutual information vanishes if and only if the truncation mask is a product set (Corollary 2.9). The non-product geometry of the truncation mask is the sole source of apparent correlation — spectral truncation couples subsystem indices through the threshold condition, correlating an initially uncorrelated state. This exact result applies to the product-thermal-state truncation setting analysed in v13; it is not yet a general proof for arbitrary Hamiltonians or full O31. Post-v15 status: verified to machine precision across all tested product-Hamiltonian cases (4×4, 9×9, 16×16), including those with composite spectral degeneracies, via the forensic evidence chain in the v15 Computational Addendum (Scripts 07b and 07d). The apparent counterexample reported in Script 05 of the v13 addendum was resolved as a basis-selection artifact in a degenerate subspace, not a failure of the proposition.
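Corollary 2.9's geometric criterion can be illustrated in the simplest diagonal (classical) setting. The sketch below is ours, not the addendum's code; it truncates a two-level product distribution with a product mask and with a non-product mask and compares the resulting mutual information:

```python
import numpy as np

def mutual_info(p_A, p_B, mask):
    """Mutual information of the product distribution p_A ⊗ p_B truncated
    to the index set `mask` and renormalised (diagonal/classical toy case)."""
    joint = np.zeros((len(p_A), len(p_B)))
    for i, j in mask:
        joint[i, j] = p_A[i] * p_B[j]
    joint /= joint.sum()
    mA, mB = joint.sum(axis=1), joint.sum(axis=0)
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return H(mA) + H(mB) - H(joint.ravel())

# Two-level thermal-style marginals (illustrative weights).
p_A = np.array([0.7, 0.3])
p_B = np.array([0.6, 0.4])

# Product mask {0,1} × {0}: the truncated state stays a product, MI = 0.
print(mutual_info(p_A, p_B, [(0, 0), (1, 0)]))   # ~0

# Non-product mask {(0,0), (1,1)}: the threshold couples the indices, MI > 0.
print(mutual_info(p_A, p_B, [(0, 0), (1, 1)]))   # > 0
```

The non-product mask manufactures correlation out of an uncorrelated state purely through its geometry, which is the qualitative content of the corollary in this toy setting.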

Provenance: v13 — Proposition 2.8, Corollary 2.9; v15 — Computational Addendum (verification to machine precision)


Q. What is the Space–Time Complementarity Corollary?

Derived from v12’s Nested Accessibility (N_acc decreases with T) and the v1 temporal rate (c_S = k_BT/ℏ increases with T): spatial spectral richness and temporal processing rate are inversely related across temperature. Applied to cosmological cycle endpoints, this motivates the proposed asymptotic picture developed in v13 (Hypothesis 4.2): minimal spatial reconstruction at the hot endpoint, maximal spatial richness with degenerate temporal coordinate at the cold endpoint.

Provenance: v13 — Corollary 4.1 (Derived); Hypothesis 4.2 (Proposed)

The Self-Audit
What the Programme’s Own Hardening Step Found

Preprint v5.3 contains a formal survival analysis — an explicit classification of all prior results into those strengthened by the transition to rigorous discrete mathematics, those unchanged, and those requiring qualification. A programme that names what it has had to qualify is more trustworthy than one that does not.

Result · Verdict · Notes
  • Dilaton-as-temperature identification · Unchanged · Survived transition to discrete definitions intact.
  • Speed of light as spectral-gap limit · Unchanged · Expander mixing lemma argument holds in discrete setting.
  • Three spatial dimensions from cycle structure · Unchanged · Replacement-product graph cycle structure preserved.
  • Einstein–Maxwell action from KK reduction · Unchanged · Phase coordinate compactification argument intact.
  • M–σ relation derivation · Unchanged · Exponent 4.38 at typical galaxy mass survives.
  • 1D CFT entanglement entropy scaling · Strengthened · Rigorous discrete Fisher definition sharpens the cut-state argument.
  • Rotation map as Levi-Civita connection · Strengthened · Discrete graph formulation makes the identification more precise.
  • Full higher-dimensional Ryu–Takayanagi area law · Qualified · 1D scaling holds; higher-dimensional projection remains Open Problem O4.

v15 — Second Documented Self-Correction: Computational Record Reconciliation
The v5.3 self-audit tabulated above was the programme’s first documented self-correction: a theorem-level hardening cycle that re-derived, re-strengthened, or re-qualified each claim under tighter mathematical discipline. In v15 the programme performed a second kind of self-correction — not theorem hardening, but computational record reconciliation.

Script 05 of the v13 Computational Addendum reported a 3-site chain numerical value (0.2152715693) that disagreed with the analytical value from Proposition 2.8 (0.6069288341). Preprint v14 hypothesised this as a “domain boundary” — a regime where the analytical formula required restricted scope. The v15 forensic audit (Scripts 07b, 07c, 07d in the v15 Computational Addendum) reproduced both values, identified the mechanism as a methodological artifact of the original verification script’s composite-eigenbasis projection across degenerate subspaces, and confirmed Proposition 2.8 to machine precision. The v14 “domain boundary” framing was explicitly retracted in v15.

The programme preserves the full evidence trail: the original positive v13 result, the discrepant Script 05 output, the failed intermediate probe (Script 07c, with its fixed-threshold design and noisy gap sequence), and the final mechanism confirmation (Script 07d). None of these intermediate artifacts are erased. This is the self-audit discipline operating at the computational record level, in addition to the theorem level. See the “What v14–v15 Added” section below for the full forensic narrative.

What We Propose
Formally Grounded Conjectures

These entries are formally grounded in the 𝒲-manifold structure. Most require further derivation — particularly, formal construction of the Dirac operator D_W — before they achieve theorem status. The v11 entries (wave-particle duality, Cross-Chart Disagreement Vector, Multi-Chart Reconstruction Principle) are derivable from the observer-filter map Π_T without requiring D_W; they carry the Proposed label. All entries below are framework-level proposals, with status notes specifying whether they are derivable extensions, interpretive proposals, or conjectural claims.

Q. How does the framework interpret the Heisenberg Uncertainty Principle?

The programme proposes that uncertainty is the Cramér-Rao bound on the Fisher information coordinate I of the 𝒲-manifold. The metric component g_II = ℏ²/4I is the quantum Fisher metric; the statistical Cramér-Rao lower bound on this metric is structurally identical to ΔxΔp ≥ ℏ/2. In v8, this is deepened: quantum uncertainty is identified as the non-commutative residue — eigenvalues of D_W below the biological chart’s thermal floor that the chart cannot resolve. Uncertainty is not fundamental fuzziness; it is the shadow of what the observer chart cannot see.

Status: Conjecture. Full derivation from D_W remains an open goal. Note: v11 deepens the chart-relative framing of uncertainty without requiring D_W; the full spectral triple derivation is a subsequent milestone.

Provenance: v1.02 · v5 · v8


Q. How does the framework interpret wave-particle duality?

The programme proposes duality is a chart artefact. The entropy wave propagating in the (S, φ) sector of the 𝒲-manifold is the primary object; the discrete Planck-scale graph G₁ is the substrate. In v11, this is formalised as a consequence of the observer-filter mechanism. The observer-filter map Π_T projects the full event state |ψ⟩ onto the eigenspace of the candidate Dirac operator D_W below the thermal cutoff λ_max(T). In the biological observer chart U_bio, λ_max(T_bio) lies below the eigenvalue range associated with the phase coordinate φ. The φ structure — which governs the periodic entropy wave — therefore lies outside the operative chart threshold and is not reconstructed in that chart; the remaining projection onto the Action and Fisher Information axes produces a localised, particle-like reconstruction. At a higher thermal baseline T′, the phase structure lies within the resolved band and the same event reconstructs as wave-like. Neither is the fundamental object; the W-manifold entropy wave is.

Status: Proposed in v11 as a formal consequence of the observer-filter formalism (chart-dependent reconstruction classes). Not a claim to have derived the Born rule or resolved quantum measurement. This is a geometric reinterpretation, not a proof. Epistemic label: Proposed.

Provenance: v3 · v4 · v8 · v11


Q. What does the framework say a singularity is?

The programme proposes that Big Bang and black-hole singularities are chart pathologies — failures of the observer’s coordinate map — not failures of physical reality. They are Jacobian singularities: breakdowns of the chosen description at the edges of its validity domain, precisely analogous to the failure of latitude-longitude coordinates at the Earth’s poles. The Big Bang is proposed as the boundary of the entropy coordinate’s domain (S → 0). The black-hole interior is proposed as the saturation point of the complexity coordinate (C → C_max). In neither case does the underlying 𝒲-manifold become undefined — a different chart is needed.

Status: Conjecture. Formal cosmological chart construction is Open Problem O29.

Provenance: v1.02 · v2 · v8


Q. How does the framework interpret time dilation?

Building on the derived principle that time is chart-local, the framework proposes that relativistic time dilation is the local observer chart registering a different rate of entropic processing. Two charts at different thermal or kinematic states generate entropy at different rates; what we measure as “time slowing down” is the difference in those rates projected onto the biological observer’s reconstruction of a shared timeline.

Status: Proposed interpretation. The structural foundation (time as entropy-rate) is derived in v1; the interpretation of SR/GR dilation as differential entropic processing is a conjecture pending formal derivation.

Provenance: v1.02 · v8 · v9


Q. What is the dark sector in this framework?

Dark energy is hypothesised to be the spectral action cost Tr(f(D_W/Λ)) — the cost of the manifold’s self-description. Dark matter is hypothesised to be the gravitational footprint of the non-commutative residue of the 𝒲-algebra: spectral structure that exists in the manifold but falls below the biological chart’s thermal resolution floor. The 95% of the energy budget we cannot directly observe is, on this proposal, not a population of invisible particles — it is the portion of the manifold’s structure that the biological chart shadows.

Status: Conditional conjecture. Requires formal construction of D_W.

Provenance: v6 · v8


Q. How does the framework interpret quantum non-locality?

The programme proposes that non-locality is a projection illusion. The fundamental arena G₁ is an expander graph with logarithmic diameter: the graph-theoretic path length between any two nodes is extremely short relative to the total number of nodes. Two particles that appear billions of light-years apart in the smooth 4D spacetime projection may be adjacent vertices in the underlying G₁ topology. They are not communicating faster than light — they are standing next to each other in the fundamental geometry.

Status: Core structural consequence of the discrete graph assumption. Formally conjectural pending D_W construction.

Provenance: v3 · v5 · v6


Q. How does the framework model decoherence?

Decoherence is not introduced as an external collapse postulate. It is modelled as a geometric trajectory in the (S, A) sector of the 𝒲-manifold — the same sector in which the Wick rotation appears as a decoherence geodesic. The quantum-to-classical transition is, on this proposal, a path in entropy-action space: geometry rather than a separate physical law.

Status: Formal conjecture grounded in v1 metric structure.

Provenance: v1.02


Q. What is the Cross-Chart Disagreement Vector, and why does it matter?

It is the primary new formal object introduced in v11. For two detectors with thermal baselines T1 < T2 measuring the same event |ψ⟩, the Cross-Chart Disagreement Vector is defined as |Δ1,2⟩ = (ΠT₂ − ΠT₁)|ψ⟩ — the difference between the two chart projections of the event. Proposition 5.1 (v11) establishes that if the event has non-zero spectral amplitude in the eigenvalue band (λmax(T1), λmax(T2)] of DW, then the disagreement vector is non-zero and its norm encodes event structure that neither detector alone recovers. The spectral support condition is important: if the event has no structure in that specific inter-chart band, the two detectors will agree perfectly even at different temperatures. Disagreement is not automatically information — it is information only when the event occupies the spectral gap between the two charts.

The supporting operator R1→2 = ΠT₂(I − ΠT₁) isolates what is noise in chart 1 and signal in chart 2. From the spectral nesting property (ΠT₂ΠT₁ = ΠT₁ for T1 < T2), this simplifies to R1→2 = ΠT₂ − ΠT₁, showing that R1→2|ψ⟩ = |Δ1,2⟩ exactly.

Status: Proposed / derivable within framework. Proposition 5.1 follows from the spectral nesting property of ΠT; it has not been subjected to external review. Epistemic label: Proposed.

Provenance: v11
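In finite dimensions the projector algebra above can be sketched directly. Everything concrete here is an illustrative assumption: a toy diagonal spectrum for DW and a chart ceiling λmax(T) that simply grows with T.

```python
import numpy as np

# Toy spectral picture (assumption for illustration): D_W is diagonal with
# eigenvalues lam, and the chart projector Pi_T keeps eigenvalues lam <= lam_max(T).
lam = np.array([0.5, 1.0, 1.5, 2.0, 3.0])

def chart_projector(lam_max):
    """Diagonal spectral truncation projector for a chart with ceiling lam_max."""
    return np.diag((lam <= lam_max).astype(float))

P1 = chart_projector(1.0)   # colder chart T1: accesses eigenvalues <= 1.0
P2 = chart_projector(2.0)   # hotter chart T2: accesses eigenvalues <= 2.0

# Spectral nesting: P2 P1 = P1 for T1 < T2
assert np.allclose(P2 @ P1, P1)

psi = np.ones(5) / np.sqrt(5)   # event state with support across the spectrum

# Cross-Chart Disagreement Vector |Delta_{1,2}> = (P2 - P1)|psi>
delta = (P2 - P1) @ psi

# R_{1->2} = P2 (I - P1) reduces to P2 - P1 under nesting, so R|psi> = |Delta>
R = P2 @ (np.eye(5) - P1)
assert np.allclose(R @ psi, delta)

# The norm is non-zero exactly when psi has support in the band (1.0, 2.0]
print(np.linalg.norm(delta) ** 2)   # amplitude in the inter-chart band
```

With this uniform state, two of the five eigenvalues fall in the inter-chart band, so the disagreement norm squared is 2/5; a state with no support in (1.0, 2.0] would give exactly zero, matching the spectral support condition in Proposition 5.1.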


Q. What is the Multi-Chart Reconstruction Principle?

A methodological consequence of the signal-noise inversion, stated formally in v11: no single detector-native chart should be presumed privileged for the complete reconstruction of a complex event. Coordinated inference across multiple detector-native charts with distinct thermal baselines may recover event structure inaccessible to any individual chart. The principle implies that some of what experimental practice calibrates away — treating inter-chart disagreement as noise to be minimised — may be the signal whose native chart lies above the instrument’s thermal floor. The programme does not prescribe a specific reconstruction algorithm; that is future work. It provides the principled reason to expect that cross-chart inference is informationally richer than single-chart inference.

Status: Proposed as a methodological principle. The specific conditions under which multi-chart combination gives a well-defined reconstruction operator remain open (Conjectural). Epistemic label: Proposed for the principle; Conjectural for specific recovery formulae.

Provenance: v11


Q. What does the framework propose about the arrow of time?

The microscopic laws of physics are time-reversible, yet macroscopic experience is irreversible. The programme proposes, in v10 as an ansatz, that the candidate Dirac operator DW governs a fundamentally unitary and reversible base geometry. Irreversibility is proposed to arise because macroscopic observers are confined to thermal charts with a strict spectral truncation floor ΠT. Low-eigenvalue structural information is continuously projected out of the observer’s accessible chart as the universe evolves. What we experience as irreversibility is the unavoidable, continuous loss of below-floor spectral information through the observer’s truncated lens.

Status: Ansatz in v10. The candidate operator DW is not yet formally constructed; this is a directional conjecture, not a derived result. Status will be updated upon completion of the spectral triple programme.

Provenance: v10 — Ansatz

What We Predict
Falsifiable Empirical Consequences

These are representative predictions drawn from the programme across v1–v11. Note: Version 11’s primary contribution is methodological and interpretive rather than a new set of numbered empirical predictions. V11 reframes existing predictions (particularly the chart-mismatch residuals from v10) through the cross-chart disagreement formalism and motivates a class of multi-chart measurement protocols; these are labelled Conjectural pending experimental protocol design. v14–v15 did not primarily add new empirical predictions; they clarified the computational status of existing claims (see “What v14–v15 Added” below).

P1 — Hubble–Decoherence Relation
In the radiation-dominated era: H = √(8π³/90) · t_P · ς², with the Planck time as the unique proportionality constant. A precise quantitative relation not built into ΛCDM by construction. Testable against precision cosmological surveys.

v1.02
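For reference, the dimensionless prefactor evaluates numerically as follows (arithmetic only; the physical content of the relation is the framework's conjecture):

```python
import math

# Numerical value of the dimensionless prefactor in P1.
prefactor = math.sqrt(8 * math.pi**3 / 90)
print(f"sqrt(8*pi^3/90) = {prefactor:.4f}")  # approx 1.6602
```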
P13 — M–σ Relation Bending
The black-hole mass–velocity-dispersion exponent is mass-dependent, ranging from 4.0 to 5.0, with the value 4.38 at typical galaxy mass derived from first principles. The preprint presents this as consistent with existing survey data (McConnell & Ma 2013); independent verification remains an open empirical task.

v2
P19 — Entanglement Entropy Scaling
For n-particle entangled states with n > 3: entanglement entropy scales logarithmically with n, not linearly as pairwise-additive models predict. Testable with current ion trap and photonic entanglement experiments.

v6
P20 — Measurement Uncertainty Floor
Ultra-precision measurements at millikelvin temperatures should show a systematic uncertainty floor scaling as ℏ/(k_B T_apparatus) that does not decrease with improved instrumentation — a non-commutative residue effect, not an engineering limitation.

v6 · sharpened v12
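The dimensional scale quoted in P20 is easy to evaluate. The snippet below only computes ℏ/(k_B T) at a few temperatures; whether a measurement floor actually sits at this scale is the framework's conjecture, not established physics.

```python
# Back-of-envelope evaluation of the proposed floor scale hbar/(k_B * T).
hbar = 1.054571817e-34  # J*s (CODATA)
k_B = 1.380649e-23      # J/K (exact, SI)

def floor_timescale(T_apparatus):
    """Characteristic time hbar/(k_B * T) for an apparatus at temperature T (in K)."""
    return hbar / (k_B * T_apparatus)

for T in (1e-3, 10e-3, 300.0):  # 1 mK, 10 mK, room temperature
    print(f"T = {T:g} K  ->  hbar/(k_B T) = {floor_timescale(T):.3e} s")
```

At 1 mK the scale is a few nanoseconds, which is why the prediction points at millikelvin platforms: at room temperature the same scale sits at tens of femtoseconds, far below ordinary timing systematics.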
P23 — Chart-Mismatch Reconstruction Residual (v12)
For two co-located detectors at different effective temperatures, after standard calibration, any residual reconstruction disagreement should be tested against a thermal information-geometric separation measure; the Fisher information distance is proposed as a computable first proxy. First-sensitive variables: timing resolution, spectral channel fragmentation, inferred resonance lifetimes. The working hypothesis is that the chart-mismatch residual scales with that separation measure. Null result: if the residual is consistent with zero across all accessible temperature separations, the mechanism is falsified.

v12 — Prediction 4.1
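As a concrete reading of the "computable first proxy", one can compute the Fisher–Rao length along a one-parameter Gibbs family, where the Fisher information with respect to inverse temperature equals the energy variance. The toy spectrum below is an arbitrary illustrative choice, not a detector model.

```python
import numpy as np

# Fisher-Rao length along the Gibbs family p(E|beta) = exp(-beta*E)/Z.
# For this one-parameter exponential family, I(beta) = Var_beta(E), so the
# length is the integral of sqrt(Var_beta(E)) d(beta).
E = np.array([0.0, 1.0, 2.0, 3.0])  # illustrative toy spectrum

def energy_variance(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    mean = (p * E).sum()
    return (p * (E - mean) ** 2).sum()

def fisher_distance(beta1, beta2, n=2001):
    """Fisher-Rao length between Gibbs states at inverse temperatures beta1 < beta2."""
    betas = np.linspace(beta1, beta2, n)
    speed = np.sqrt([energy_variance(b) for b in betas])
    return np.sum((speed[1:] + speed[:-1]) / 2 * np.diff(betas))  # trapezoid rule

print(fisher_distance(0.5, 1.0))  # grows with the thermal separation
print(fisher_distance(0.5, 2.0))
```

The proxy is monotone in the thermal separation, which is the minimal property a separation measure needs for the scaling hypothesis in P23 to be testable.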

What v12–v13 Added
The Programme’s Recent Advances

Versions 12 and 13 mark a transition from architectural development to computation and consolidation. V12 renamed the series, proved structural propositions, and stated falsification criteria. V13 performed the first actual calculations in the programme’s history.

Q. Why was the series renamed from “Entropy Waves” to “Manifold Relativity”?

The mature framework does not model entropy as a local scalar field propagating on standard spacetime. The historical title “Entropy Waves” invited precisely the category of critique that addresses a theory the programme does not propose. From v12, the programme is Manifold Relativity: Observer-Dependent Spectral Accessibility.

Provenance: v12


Q. Can the programme be falsified?

Yes. V12 states explicit falsification criteria: (1) if co-located detectors at different temperatures show no chart-mismatch reconstruction residual (P23); (2) if the qualitative sub-additivity mechanism from spectral truncation fails to generalise beyond the current toy/scaling regime, or if no viable exact composition law emerges from further analysis (O31/O38); (3) if the spectral filtration is not monotone; (4) if the millikelvin measurement floor is not found (P20); (5) if the Hubble–decoherence scaling fails (P1).

Provenance: v12 — Section 5


Q. What did the O31 computation find?

V13 performed the programme's first numerical computations. Composite toy Dirac operators (4 to 16 dimensions) were spectrally truncated and the resulting entropy composition was tested. Confirmed: spectral truncation of product states creates sub-additive entropy in every tested case; the full spectrum recovers exact additivity; sub-additivity strength scales monotonically with truncation severity. Not confirmed: the specific κ-addition functional form is not uniquely selected — alternative sub-additive models fit comparably well. The qualitative mechanism of the v7 bridge conjecture is supported; the exact quantitative composition law remains open.

Provenance: v13 — Section 2
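The mechanism can be reproduced in a few lines, independently of the v13 scripts. This is a minimal sketch under illustrative assumptions (two-level factor spectra, β = 1, truncation by keeping the largest composite Gibbs weights):

```python
import numpy as np

def gibbs_probs(spectrum, beta=1.0):
    """Gibbs probabilities for a toy spectrum at inverse temperature beta."""
    w = np.exp(-beta * np.asarray(spectrum, float))
    return w / w.sum()

def entropy(p):
    """Von Neumann / Shannon entropy in nats (zero weights dropped)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

pA = gibbs_probs([1.0, 2.0])
pB = gibbs_probs([1.0, 2.0])
pAB = np.outer(pA, pB).ravel()   # product composite state

S_A, S_B = entropy(pA), entropy(pB)
assert abs(entropy(pAB) - (S_A + S_B)) < 1e-12   # full spectrum: exact additivity

# Truncate: keep the k largest composite probabilities, renormalise.
k = 2
kept = np.sort(pAB)[-k:]
p_trunc = kept / kept.sum()

defect = (S_A + S_B) - entropy(p_trunc)
print(f"sub-additivity defect = {defect:.4f}")   # positive under truncation
```

Full retention gives exact additivity by construction; the truncated composite cannot exceed ln k of entropy, so a severe enough truncation forces a positive defect, which is the qualitative mechanism V13 confirmed.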


Q. What is the 2D EFE vacuity correction?

The v13 planning cycle proposed solving for gIE by requiring the (I, E) sector’s Ricci tensor to recover the vacuum Einstein equations. This fails: in two dimensions, the Einstein tensor vanishes identically for any metric. The “backward EFE” approach places no constraint on gIE. The actual constraints must operate through the 3D replacement product lift (new Open Problem O37), the determinant condition, Ryu-Takayanagi compatibility, and rotation map convergence.

Provenance: v13 — Section 3


Q. What is the 3+1 Emergence Hypothesis?

V13 organises the spatial and temporal claims into a unified statement (Hypothesis 5.1): (x,y,z) arise from the observer’s reconstruction of the accessible (I,E)-sector geometry, lifted to three dimensions through the replacement product. The temporal coordinate t arises as a chart-dependent ordering parameter associated with entropy processing rate cS(T). Different observer charts reconstruct different effective 3+1 descriptions. Cosmological expansion is reinterpreted as spectral enrichment as the universe cools. This hypothesis is Proposed, not Derived — its derivation depends on resolving O1 and O37.

Status: Proposed. Depends on O1, O37, O39. Not yet Derived.

Provenance: v13 — Hypothesis 5.1


Q. Does the programme preserve unexpected or negative computational results?

Yes. V13 ships with a computational addendum (five Python scripts, raw outputs, and a README) preserving the full evidence trail: the initial positive signal, the scaling extension, the epistemic correction (catching a tautological overclaim), the independent-κ test (showing non-uniqueness), and the analytical formula derivation. Misleading first-pass successes, weaker alternative fits, and negative checks are part of the scientific record, not discarded as noise. The v13 Computational Addendum is preserved unchanged and is now joined by a separate v15 Computational Addendum (see “What v14–v15 Added” below) containing the forensic evidence chain that resolves the v13/v14 3-site discrepancy.

Provenance: v13 — Computational Addendum; v15 — Computational Addendum (separate companion)

What v14–v15 Added
The Reconciliation Cycle

v14 and v15 form a coupled reconciliation cycle, not a new-claims cycle. v14 hardened v13 under referee pressure and introduced a “domain boundary” remark (rem:domain) to bound the scope of Proposition 2.8 in light of a 3-site numerical discrepancy reported in Script 05 of the v13 Computational Addendum. v15 performed a forensic audit, identified the discrepancy as a methodological artifact of the original verification script, and explicitly retracted the v14 “domain boundary” framing. The resolution is the strongest single piece of adversarial-review evidence the programme has produced so far.

Q. What is the v14–v15 cycle about?

It is a reconciliation / retraction cycle at the Computational epistemic tier, not a new-claims cycle. v14 was a referee-hardened submission draft; v15 is a computational-record reconciliation and retraction release. Neither version introduces new theoretical content, new Open Problems, or new empirical predictions. The substantive contribution of v15 is a forensic evidence chain (Scripts 07b, 07c, 07d in the v15 Computational Addendum) that reproduces the v13/v14 3-site numerical discrepancy, diagnoses its mechanism, and confirms Proposition 2.8 to machine precision in the product-Hamiltonian regime.

Provenance: v14 — referee-hardened submission draft; v15 — “Computational Record Reconciliation and Retraction”


Q. What did v14 add?

v14 was a referee-hardening pass on v13. Its main additions were: domain-of-validity bounding for Proposition 2.8 via a new Remark (rem:domain), motivated by the 3-site chain discrepancy reported in Script 05 of the v13 Computational Addendum; expanded literature positioning relative to adjacent work; and conclusion-section sharpening on the α-vs-κ non-uniqueness question for O38. The rem:domain framing hypothesised that the 3-site discrepancy marked a “domain boundary” beyond which Proposition 2.8 required restricted scope. This framing was explicitly retracted in v15 after the forensic audit.

Provenance: v14 — Remark rem:domain (later retracted); literature positioning; O38 conclusion sharpening


Q. Did the analytical formula fail at the 3-site chain in v13?

No, but the programme briefly thought it did. Script 05 of the v13 Computational Addendum reported a numerical mutual information of 0.2152715693 at the 3-site chain (9×9, β=0.5, 4/9 retention), while the analytical value from Proposition 2.8 was 0.6069288341. Preprint v14 hypothesised this as a “domain boundary” where the analytical formula broke down under some interaction-related structure. The v15 forensic audit proved this was a methodological inconsistency inside a degenerate subspace, not a theorem failure.

The mechanism is as follows. The 3-site composite Hamiltonian has a spectrum {−2√2, −√2, −√2, 0, 0, 0, +√2, +√2, +2√2} with multiplicities 1, 2, 3, 2, 1. Only 2 of the 9 composite eigenvectors are pure product states; the remaining 7 are superpositions inside the degenerate subspaces. The original Script 05 used numpy.linalg.eigh, which returns an arbitrary orthonormal basis within each degenerate subspace. When the 4/9 retention threshold partially retained a degenerate subspace, the truncated state produced by the numerical solver was not the same truncated state referenced by the analytical coordinate mask — the two were related by an arbitrary basis rotation. Both values were correct, but they described two different truncated states.

When the composite degeneracies were explicitly lifted by an asymmetric perturbation (Script 07d), the gap between the analytical and numerical values collapsed from 3.92 × 10⁻¹ at ε = 0 to ≤ 10⁻¹⁵ by ε ≥ 10⁻³. The v14 “domain boundary” hypothesis was explicitly retracted, and Proposition 2.8 is now verified to machine precision across all tested product-Hamiltonian cases, including those with composite spectral degeneracies.

Provenance: v15 — §C.2 (forensic probe narrative); Remark rem:domain (rewritten); v15 Computational Addendum §4 (Script 07b), §6 (Script 07d)
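The degeneracy structure is easy to reproduce. Assuming each dim-3 factor is the standard 3-site hopping matrix (whose spectrum {−√2, 0, +√2} reproduces the quoted composite spectrum), the sketch below checks the 1, 2, 3, 2, 1 multiplicities and shows that a small asymmetric perturbation, in the spirit of Script 07d but with an illustrative perturbation matrix, lifts every degeneracy:

```python
import numpy as np

# Each dim-3 factor: standard 3-site hopping matrix (assumption for illustration).
D3 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
I3 = np.eye(3)
D_AB = np.kron(D3, I3) + np.kron(I3, D3)   # 9x9 product Hamiltonian

s = np.sqrt(2.0)
expected = np.sort([-2*s, -s, -s, 0, 0, 0, s, s, 2*s])
assert np.allclose(np.sort(np.linalg.eigvalsh(D_AB)), expected)  # multiplicities 1,2,3,2,1

# Inside each degenerate block, numpy.linalg.eigh may return ANY orthonormal
# basis, so a truncation that partially retains a block is basis-dependent.
# An asymmetric single-bond perturbation (illustrative, not the Script 07d
# matrix) lifts every composite degeneracy, making eigenvectors unique up to phase:
eps = 1e-3
bond = np.array([[0., 1., 0.],
                 [1., 0., 0.],
                 [0., 0., 0.]])            # perturb one hopping bond only
D_pert = (np.kron(D3 + eps * bond, I3)
          + np.kron(I3, D3 + 2 * eps * bond))
gaps = np.diff(np.sort(np.linalg.eigvalsh(D_pert)))
print(gaps.min())   # strictly positive: all nine levels now distinct
```

With the degeneracies lifted, a top-N-by-magnitude retention selects a unique set of eigenvectors, which is why the analytical and numerical truncated states coincide once ε is away from zero.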


Q. What are Scripts 07b, 07c, and 07d?

The three-script forensic evidence chain embedded in the v15 Computational Addendum:

  • Script 07b — Mechanism Diagnosis. A four-stage forensic probe. Stage 1 reproduces the three method values at the 3-site case (Method 1 analytical, Method 2A numerical in product basis, Method 2B numerical in composite eigenbasis), confirming that Methods 1 and 2A both give 0.6069288341 while Method 2B gives 0.2152715693. Stage 2 diagonalises the composite Hamiltonian and confirms that 2 of 9 eigenvectors are pure product states and 7 are superpositions within degenerate subspaces. Stage 3 sweeps the full v13 test matrix and finds three Method 2B disagreement cases, each exactly where the truncation threshold partially retains a composite degenerate subspace. Stage 4 records the final diagnosis. Build-gate for the v15 addendum PDF.
  • Script 07c — Asymmetric Perturbation with Fixed Threshold. The preserved failed intermediate probe. Designed to lift the composite degeneracies via an asymmetric perturbation (which it does successfully), but held the threshold θ = √2 fixed across the sweep, causing the retention count to drift as eigenvalues crossed the threshold. The resulting gap sequence is noisy rather than monotonically convergent. Retroactively framed in the v15.0 manuscript as a failed intermediate probe; preserved unchanged in the addendum per the programme’s discipline of not erasing intermediate findings. Not a build-gate.
  • Script 07d — Asymmetric Perturbation with Fixed Retention. The mechanism confirmation. Same asymmetric perturbation as 07c, but with the retention count fixed at N = 4 (the v13 4/9 case) by selecting the top N composite eigenvalues by magnitude at each ε. The gap collapses cleanly from 3.92 × 10⁻¹ at ε = 0 to ≤ 10⁻¹⁵ by ε ≥ 10⁻³. Two supplementary sweeps at N = 2 and N = 6 act as negative controls and show machine-precision agreement throughout, confirming that the discrepancy mechanism is specific to partial-degenerate-subspace retention, not to degeneracy in general. Build-gate for the v15 addendum PDF.

Together the three scripts constitute a reproducible forensic record: one probe that fails (07c), one that diagnoses (07b), one that confirms the mechanism (07d). The full source and outputs are embedded verbatim in the v15 Computational Addendum.

Provenance: v15 Computational Addendum — §§4–6 (script narratives); Appendices C and D (embedded source and outputs)


Q. What is Proposition 2.8 verified in, and what is it not verified in?

v15 establishes Proposition 2.8 in the product-Hamiltonian regime DAB = DA ⊗ 𝟙 + 𝟙 ⊗ DB across all tested retention levels and system sizes (4×4, 9×9, 16×16), including cases with composite spectral degeneracies. Under a properly-constructed product-basis coordinate mask, the numerical mutual information agrees with the analytical formula to machine precision at every tested case.

v15 does not establish Proposition 2.8 for genuinely interacting composite Hamiltonians (i.e., Hamiltonians that cannot be written as the sum of local-only terms). Extension of the verification chain to that regime is part of Open Problem O38 and remains open. The v15 cycle sharpens the understanding of what the question is, but does not close it.

Provenance: v15 — §C.2 (scope statement); v15 Computational Addendum §7 (Scope of Verification)


Q. Did v15 close any Open Problems?

No. v15 is a record-reconciliation release at the Computational epistemic tier. It neither closes O38 (the α/κ functional-form determination) nor introduces any new Open Problems. The mechanism behind the v13/v14 3-site discrepancy is now fully understood, but the analytical functional-form question for O38 is unchanged. No O40 is minted. The programme’s Open Problems list after v15 is structurally identical to the list after v13 and v14: O1 (with the O37 dependency), O4, O31 (with the O38 dependency), O35, O36, O37, O38, O39, and the External Review frontier. The post-v15 update to O38 is a status sharpening only — not a closure — and is reflected in the “What Remains Open” section below.

Provenance: v15 — §13 (Open Problems, unchanged from v14)


Q. Where can I find the v15 Computational Addendum?

The v15 Computational Addendum is a separate self-contained PDF companion to preprint v15.0, available on the programme website at manifold-relativity-programme.org under the v15 materials. The PDF is a 36-page single-file artifact containing the forensic chain narrative, the full source of Scripts 07b, 07c, and 07d as typeset listings, the verbatim console outputs of all three scripts, an in-PDF integrity hash table, and a build-and-verification note. The document is published under a three-tier payload-hash integrity model: the title page displays the SHA-256 of the embedded content payload (byte-identical to the source files used at build time), a per-file hash table appears in Appendix B, and the final published-PDF MD5/SHA-256 are listed on the website download page alongside the PDF itself.

Provenance: v15 Computational Addendum — published 2026-04
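A reader wishing to check a downloaded file against the published hash table can use a standard chunked SHA-256 computation. The filename and the expected digest below are placeholders, not the published values, which live in Appendix B and on the download page:

```python
import hashlib

def sha256_of(path, chunk=1 << 16):
    """SHA-256 hex digest of a file, read in chunks to handle large PDFs."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# expected = "..."                       # copy the digest from the published table
# assert sha256_of("some_file.pdf") == expected

print(hashlib.sha256(b"payload bytes").hexdigest())  # demo on in-memory bytes
```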

What Remains Open
The Programme’s Explicit Frontier

A credible programme states clearly what it has not yet established. The following are open problems formally indexed in the preprint series.

  • O1 — Spatial Metric gIE(I,E) (Core): Derivation of the off-diagonal spatial metric component. The naive backward-EFE approach is vacuous in 2D (v13 correction). The constraint must operate through the 3D replacement product lift (O37). Status: open, constraint structure clarified.
  • O2 — Full Einstein Field Equations: Recovery from the Ricci tensor of the emergent 3D spatial metric. Depends on O1 and O37.
  • O4 — Higher-Dimensional Ryu–Takayanagi: The 1D CFT scaling is established; the complete area-law in arbitrary higher dimensions requires a projection theorem mapping discrete cut sets to codimension-2 extremal surfaces in emergent spatial geometry.
  • O16 — Strong & Weak Forces: Unification requiring non-zero g and g′ couplings beyond the electromagnetic KK reduction established in v4.
  • O25 / DW Construction (Core): Formal construction of the 𝒲-manifold Dirac operator as a full spectral triple. Version 10 introduced the candidate bare operator DW(0); this is a directional advance, not a completed proof.
  • O31 — Composition Law from Spectral Truncation: Status: qualitative mechanism supported at toy/scaling level (v13). Sub-additivity from truncation is confirmed across 4–16 dimensions; exact κ-addition functional form not uniquely selected. Exact composition law remains open (O38).
  • O35 — Neutrino Stress Test: Neutrinos provide a bounded stress test of the chart-matching framework. Indexed in v11.
  • O36 — RG-Flow Identification: Whether the vertical comparison maps satisfy the formal properties of a Wilsonian RG transformation. Indexed in v12.
  • O37 — 3D Metric from Replacement Product (New, v13): Construct the explicit three-dimensional spatial metric from the (I,E) base metric and the v3 replacement product cycle structure. Required before O1 can be constrained by EFE recovery.
  • O38 — O31 Functional Form (New, v13): Determine analytically whether the sub-additive defect takes the specific κ-addition form, or a different sub-additive form, or a more complex function. Test with non-product Hamiltonians. Post-v15 status: the mechanism behind the v13/v14 3-site discrepancy is now understood (resolved in the v15 Computational Addendum as a basis-selection artifact in a degenerate subspace, not a failure of Proposition 2.8); the analytical functional-form question itself remains open.
  • O39 — Temporal Emergence Map (New, v13): Rigorous construction of the temporal coordinate from the (S,φ) sector, comparable in precision to the spatial emergence programme.
  • External Review: The programme’s internal multi-node peer-review process (CAC) is rigorous and adversarial, but endogenous. Community response and expert critique are the necessary next frontier that internal review cannot substitute for.

Manifold Relativity Programme · Preprints v1–v15
Paul E. Sorvik, Principal Investigator · ORCID 0009-0008-5717-7110
manifold-relativity-programme.org
Developed through the Collaborative Augmented Consciousness (CAC) methodology
Claude (Anthropic) · Gemini (Google DeepMind) · ChatGPT (OpenAI)
All claims on this page are assertions within the Manifold Relativity framework.
Epistemic labels — Derived · Proposed · Predicted · Open — are maintained throughout. Status notes may use “conjectural” as descriptive prose under the Proposed heading.

Posted in ai, Manifold Relativity, Publications, Research