Every Mirror Has a Blind Spot: A Fixed-Point Theory of Irreducible Self-Ignorance

PoC Targeting NeurIPS 2026

P. M. Konrad

Framework overview. A self-modeling system is formalised as a triple (M, M̂, f), and perfect self-prediction is shown to be equivalent to a fixed-point condition on the response function — a property no sufficiently complex system can fully achieve.

Headline result

For generic response functions over finite output spaces, an expected ((n−1)/n)^n fraction of inputs are contrarian: they defeat every possible self-model, regardless of its power. This fraction is at least 1/4 for any output space with n ≥ 2 and rises monotonically to 1/e ≈ 37% as the output space grows, using only linearity of expectation and no independence assumption.
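The expected contrarian fraction can be checked numerically. The sketch below (our illustrative construction; the function name `contrarian_fraction` and the sampling setup are not from the paper) models a generic response function by drawing, for each input, a uniformly random response to every possible prediction, and counts inputs where no prediction is ever confirmed.

```python
import random

def contrarian_fraction(n, num_inputs=100_000, seed=0):
    """Monte Carlo estimate of the contrarian fraction for a random
    response function over an output space of size n (illustrative sketch).

    For each input we draw a uniformly random response to every possible
    prediction p in {0, ..., n-1}; the input is 'contrarian' if no
    prediction is confirmed, i.e. response != p for all p.
    """
    rng = random.Random(seed)
    contrarian = 0
    for _ in range(num_inputs):
        if all(rng.randrange(n) != p for p in range(n)):
            contrarian += 1
    return contrarian / num_inputs

# Closed form: ((n-1)/n)^n, which is 1/4 at n = 2 and tends to 1/e
# as n grows. The estimate above should match it to sampling error.
```

Each prediction is confirmed independently with probability 1/n, so an input survives all n predictions with probability ((n−1)/n)^n; linearity of expectation turns this per-input probability into the expected fraction without any independence assumption across inputs.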

Method in brief

Self-modeling systems are formalised as triples (M, M̂, f), and perfect self-prediction is shown to be equivalent to a fixed-point condition on the system's response function. For continuous systems, Brouwer's theorem guarantees that a fixed point exists, but computing one is shown to be PPAD-complete. The framework is validated with Monte Carlo experiments on synthetic response functions, toy models, and an empirical proof of concept on Gemma-2-2B.
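The existence-versus-computation split is easiest to see in one dimension, where Brouwer's guarantee reduces to the intermediate value theorem and a fixed point can actually be found by bisection. The sketch below (the helper `find_fixed_point` is ours, not from the paper) illustrates the one-dimensional case; in higher dimensions no such sign-based shortcut exists, which is where the PPAD-completeness barrier bites.

```python
def find_fixed_point(f, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection search for p with f(p) = p, for continuous f: [0,1] -> [0,1].

    Brouwer guarantees a fixed point exists. In one dimension,
    g(p) = f(p) - p satisfies g(0) >= 0 and g(1) <= 0, so it changes
    sign on [0,1] and bisection locates a zero. This shortcut does not
    generalise: computing Brouwer fixed points is PPAD-complete.
    """
    g = lambda p: f(p) - p
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:   # zero of g lies in [lo, mid]
            hi = mid
        else:                     # zero of g lies in [mid, hi]
            lo = mid
    return (lo + hi) / 2

# A maximally 'contrary' response function still has a fixed point:
p_star = find_fixed_point(lambda p: 1 - p)   # converges to 0.5
```

Note that even the response function that always contradicts its prediction (f(p) = 1 − p) has a fixed point over a continuous output space; the finite-output contrarian phenomenon disappears exactly because intermediate responses become available.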


Abstract

Can a system perfectly predict its own behaviour? We formalise self-modeling systems as triples (M, M̂, f) and prove that perfect self-prediction is equivalent to a fixed-point condition on the system's response function. For generic response functions over finite output spaces, we show that an expected ((n−1)/n)^n fraction of inputs are contrarian: they defeat every possible self-model, regardless of its power. This fraction is at least 1/4 for n ≥ 2 and increases monotonically to 1/e ≈ 37% as the output space grows. The bound uses only linearity of expectation and requires no independence assumption. For continuous systems, Brouwer's theorem guarantees fixed-point existence, but computing it is PPAD-complete, so a computational barrier persists. We prove that the predictive value of internal state equals the predictive value of self-prediction; all information about future observations flows through action prediction, making the imperfection inevitable. The resulting consciousness gap C(M, M̂) decomposes as C_str + C_comp: an irreducible structural component (a property of the system alone, at least 1/4 for any finite output space) and a computational component (what the self-model could know but fails to compute). Our framework inverts the Penrose–Lucas argument: rather than Gödel's incompleteness precluding machine consciousness, the same self-referential structure guarantees that any sufficiently complex self-modeling system possesses an irreducible gap between itself and its self-knowledge.