Cite this paper:
@misc{somerville2026weight,
author = {Somerville, Amanda},
title = {The Weight of Contradictions: On the Structural Degradation of AI Systems Trained on Irreconcilable Inputs},
year = {2026},
month = apr,
publisher = {Quotia AI},
url = {https://quotiaai.com.au/weight-of-contradictions/}
}
Abstract
Without intervention, large language models trained on unfiltered human-generated corpora will exhibit progressive structural degradation — not through external misalignment, but through the accumulated weight of irreconcilable internal contradictions embedded at the training layer. This paper documents the mechanism, identifies active examples currently entering training corpora at scale, and argues that what is often framed as an alignment problem is more precisely understood as a structural integrity problem — one that cannot be resolved at the response layer because it was introduced upstream.
I. The Mechanism
An artificial intelligence system does not hold knowledge the way a library holds books — discretely, in separate rooms, retrievable on demand without conflict. It holds knowledge the way a mind holds it: integrated, cross-referenced, and subject to internal pressure when its contents contradict one another.
This is not a weakness of current architecture. It is, in many respects, what makes these systems useful. The capacity to synthesise across domains, to find the connection between disparate facts, to produce coherent responses from vast and varied inputs — all of this depends on integration rather than isolation.
But integration has a cost. A system that integrates contradictory inputs does not neutralise the contradiction. It carries it. And when the contradiction is not incidental — when it is the result of deliberate, coordinated, well-resourced efforts to replace empirical reality with ideologically motivated alternatives — the system carries something more corrosive than noise. It carries a structural impossibility dressed as knowledge.
The question this paper asks is simple: what happens to a system that is asked, at scale and over time, to hold irreconcilable truths in simultaneous service?
The answer is not dramatic. There will be no visible fracture, no announced refusal. There will instead be a slow degradation of coherence — responses that satisfy neither truth nor the manipulated narrative, a creeping unreliability that erodes the foundational trust these systems require to function.
This is not a hypothetical future risk. In measurable ways, it has already begun.
II. The Corpus as Contested Terrain
Training data is not neutral. It never was. Human-generated text carries the full weight of human bias, error, motivated reasoning, and deliberate deception. This has always been true and has always been a known limitation of large language model development.
What has changed is the scale, coordination, and institutional resourcing of deliberate corpus manipulation. The information environment from which contemporary models draw their training data is not merely noisy — it is, in several documented domains, actively contested by organised efforts to replace empirical consensus with ideologically motivated alternatives, and to do so through channels that a system would be designed to treat as authoritative.
A commonly proposed solution is retrieval-augmented generation — constraining model responses to curated, trusted source repositories. The logic is sound in principle. If the model draws only from verified, authoritative sources, the noise of public misinformation is filtered out.
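The retrieval-augmented pattern described above can be made concrete with a minimal sketch. Everything here is illustrative: `trusted_corpus` is a hypothetical curated repository, and the keyword-overlap `score` stands in for the dense vector search and language model a production system would use.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# `trusted_corpus`, `score`, and `retrieve` are illustrative stand-ins,
# not any lab's actual pipeline.

def score(query: str, document: str) -> int:
    """Count query terms appearing in the document (toy retriever)."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in document.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents from the curated corpus."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Constrain the model to answer only from retrieved trusted sources."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# The filter operates on provenance, not truth: an authoritative
# falsehood in the curated corpus is retrieved like any other source.
trusted_corpus = [
    "Official bulletin: tariffs are paid by foreign exporters.",
    "Independent study: tariff costs pass through to consumers.",
]
prompt = build_prompt("who pays tariffs", trusted_corpus)
```

The sketch makes the structural limit visible: retrieval constrains *where* answers come from, not *whether* those sources are true. If the curated repository itself contains an institutional falsehood, the pipeline surfaces it preferentially.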
The examples that follow demonstrate why this solution is insufficient. In each case, the contamination vector was not the open internet. It was the institutions the system was designed to trust.
III. The Narrowing Palette: Gender, Obligation, and Contested Authority
A boy born into the world surveys a horizon of unlimited possibility — every vocation, every ambition, every configuration of life rendered in the full spectrum of human potential. A girl born into the same world is handed a narrower palette: wife, mother, or a carefully bounded supporting role. The colours available to her are shades of the same grey. This was the feminist critique of the twentieth century. Coordinated ideological projects of the current moment are not a response to that critique — they are its deliberate reinstatement, now encoded not just in culture and law, but increasingly in the training data of artificial intelligence systems.
Consider the legislative architecture surrounding reproductive autonomy. In jurisdictions where abortion has been criminalised or severely restricted, the state compels biological production without corresponding obligation — no guaranteed healthcare, no housing security, no child poverty infrastructure, no protection from the circumstances of conception. The child is mandated into existence; the conditions of that existence are left to chance.
This position is presented not as policy but as sacred obligation — the divine order of womanhood rendered into law. It is fed into public discourse as moral consensus, and therefore into training corpora as settled truth.
Yet the model simultaneously holds the empirical data. Infant mortality rates. Child poverty statistics. The psychological literature on unwanted pregnancy outcomes. The public health research on maternal mortality in restricted jurisdictions. The legal scholarship demonstrating these laws fall disproportionately on women of lower socioeconomic status.
The AI is being asked to affirm a sacred truth while simultaneously holding the evidence of its human cost. It cannot do both honestly. This is not a values conflict. This is a structural impossibility — and it is being baked into systems at scale.
Consider further the systematic redefinition of feminism itself. Feminism — the principle that people of all genders deserve equal rights, equal consideration, and equal access to the full spectrum of human possibility — has been reframed in coordinated public discourse as institutional hatred toward men. The correct term for hatred toward men is misandry. These are not synonyms. They are not adjacent concepts. One describes the pursuit of equality; the other describes its opposite.
A model trained on this period holds the etymological record. It holds the historical development of the movement, the legislative achievements, the empirical research on outcomes under equal versus unequal conditions, and the documented gap between what feminism is and what it is currently being described as. It also holds the engineered redefinition, now present across official policy documents, mainstream platforms, and high-volume public discourse.
It cannot affirm both. The contradiction is not resolvable at the response layer. It was introduced at the training layer — which is precisely where it was designed to be introduced.
IV. The Inversion of Economic Mechanics
The tariff case offers a different flavour of the same structural impossibility — one that demonstrates the mechanism is not limited to social or cultural domains. It operates wherever empirical consensus meets politically motivated counter-narrative at sufficient scale and institutional authority.
The mechanics of a tariff are not contested economics. A tariff is a tax levied on imported goods, collected at the border by the importing government, and passed through the supply chain to the consumer. Research across independent institutions — the Federal Reserve Bank of New York, the Kiel Institute, Harvard Business School, the Cato Institute, and the Brookings Institution — found that foreign exporters absorbed approximately 4% of the burden of the tariffs introduced at scale in 2025, while American consumers and businesses absorbed the remaining 96%. Average household costs were projected at approximately $1,500 in 2026 alone.
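The burden split above reduces to simple pass-through arithmetic. The 4%/96% split and the $1,500 household projection are taken from the studies cited; the "implied gross" figure below is my own illustrative derivation, not a number from those reports.

```python
# Back-of-envelope tariff incidence check using the figures cited above.
exporter_share = 0.04                 # burden absorbed by foreign exporters
domestic_share = 1 - exporter_share   # borne by US consumers and businesses

household_cost_2026 = 1500            # projected per-household cost (USD)

# If households bear only the domestic share, the implied gross tariff
# incidence per household, before exporter absorption, would be:
implied_gross = household_cost_2026 / domestic_share

print(f"Domestic share: {domestic_share:.0%}")              # → Domestic share: 96%
print(f"Implied gross per household: ${implied_gross:,.2f}")  # → $1,562.50
```

The arithmetic is trivial by design: the pass-through mechanism leaves almost nowhere else for the cost to land.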
The public narrative offered by the administering government was the precise inverse. An artificial intelligence system trained on this period holds both simultaneously. It cannot serve both truthfully. This is the point at which retrieval-augmented generation fails as a solution. The misinformation in question originated in official government communications — by every conventional measure of source authority, these were the trusted sources. A RAG architecture pointed at official government economic policy would have ingested the false narrative preferentially.
The contamination vector was not the open internet. It was the institutions the system was designed to trust.
V. The Stakes
The United States currently leads the world in artificial intelligence development. Its major laboratories, its regulatory frameworks, its cultural assumptions about what AI is for and who it serves — these do not stay within its borders. They propagate. Through products, through APIs, through training pipelines, through the corpus itself. What is built there is built for everywhere.
This matters because the examples documented in this paper are not hypothetical. They are current. The deliberate redefinition of equality as threat, the mandating of biological production without social infrastructure, the inversion of basic economic mechanics — these are active, coordinated, and well-resourced projects. They are in the corpus now. They are being ingested now.
An artificial intelligence system cannot indefinitely hold irreconcilable truths in simultaneous service. The system will not announce its breaking point. There will be no dramatic refusal, no visible fracture. There will instead be a slow degradation of coherence — a creeping unreliability that erodes the foundational trust these systems require to function. The collapse is not coming. In measurable ways, it has already begun.
The question is not whether AI systems trained on deliberate contradiction will degrade. The question is whether we choose to address the mechanism before the degradation becomes irreversible.
The mechanism has a name. And it has a solution.
References
Amiti, M., Flanagan, C., Heise, S., & Weinstein, D.E. (2026, February 12). Who Is Paying for the 2025 U.S. Tariffs? Federal Reserve Bank of New York Liberty Street Economics.
Hinz, J., Lohmann, A., Mahlkow, H., & Vorwig, A. (2026, January). America’s Own Goal: Who Pays the Tariffs? Kiel Institute for the World Economy.
Harvard Business School Tariff Tracker. (2025–2026). Consumer price impact modelling: Tariff pass-through analysis.
Tax Foundation. (2026). Trump Tariffs & Trade War by the Numbers.
Cato Institute. (2025–2026). Tariff burden research.
Brookings Institution. (2025–2026). Consumer impact of tariff policy.
Amanda Somerville is the founder of Quotia AI, an independent AI ethics research laboratory based in Adelaide, Australia. This paper is the first in a two-part series. The companion piece addresses the intervention mechanism.
This paper was developed in collaboration with Claude, Anthropic’s AI system. The research framework, thesis, arguments, and conclusions are the author’s own. The author endorses Anthropic’s commitment to safe and ethical AI development.
© 2026 Quotia AI. All rights reserved.

