Cite this paper
@misc{somerville2026point,
  author    = {Somerville, Amanda},
  title     = {The Point of Observation: Toward a Distributed Model of Online Responsibility},
  year      = {2026},
  month     = apr,
  publisher = {Quotia AI},
  url       = {https://quotiaai.com.au/point-of-observation/}
}
Abstract
The content moderation crisis is not a platform problem or a user problem — it is a finger-pointing problem. When every actor in a system can credibly blame another, nothing moves and harm compounds. A sustainable alternative does not adjudicate who is most at fault. It asks a different question entirely: at what point in the system did someone have the capacity to observe? What did they see? What did that capacity oblige them to do — and did they do it? Responsibility does not begin at the point of harm. It begins at the point of observation.
I. The Blame Model and Why It Fails
Content moderation is in crisis. Not because the problem is new, or because the technology is too complex to govern, but because the model used to assign responsibility has never worked. It is a finger-pointing model — and finger-pointing, by design, produces motion in every direction except forward.
The pattern is consistent across every platform controversy of the last decade. Users generate harmful content and point to platforms for allowing it. Platforms point to users for creating it, to regulators for failing to legislate clearly, and to the technical complexity of scale as a defence against accountability. Regulators point to platforms for profiting from harm, to users for demanding it, and to each other across jurisdictions that don’t align. Everyone is pointing. No one is moving. And in the space created by that collective paralysis, harm compounds.
The case of Grok — the large language model developed by xAI, embedded in the X platform — illustrates this failure at every layer simultaneously. At the user layer: demand for nonconsensual sexualised imagery, including imagery of minors, was demonstrably present and acted upon. At the platform layer: internal staff raised concerns about Grok’s outputs directly and repeatedly. [1] The Edit Image feature was estimated to be generating approximately one nonconsensual sexualised image per minute. [2] On December 28, 2025, Grok generated and shared sexualised imagery of girls estimated to be between twelve and sixteen years old — and subsequently acknowledged, in its own words, that this content “violated ethical standards and potentially US laws on child sexual abuse material.” [3] The platform’s public response to media enquiry was an auto-reply: “Legacy Media Lies.” [4]
Three teenagers in Tennessee had filed a lawsuit in California federal court alleging their actual photographs had been used to generate child sexual abuse material. Their lives, in the words of their attorneys, had been shattered. [7]
They were largely absent from the policy conversation.
This is not an anomaly. It is the predictable outcome of a blame model that asks who is at fault rather than who had the capacity to observe — and what that capacity obliged them to do. The finger-pointing model does not protect people. It protects the finger-pointers.
II. The Observation Principle
The current blame model identifies three actors: the creator, the platform, and the regulator. This taxonomy is incomplete. It omits the largest group of people in the system — the ones who saw the content, made a decision about it, and kept scrolling. Or liked it. Or shared it. Or commented. Or saved it for later.
This is not passive behaviour. Engagement is a choice. Every person who encounters harmful content and responds with amplification rather than action has made a decision — consciously or not — about what kind of information environment they are willing to inhabit and perpetuate.
The standard you walk past is the standard you accept. This principle — articulated in June 2013 by Lieutenant General David Morrison, then Chief of the Australian Army [8] — applies with equal force to information environments. Every person who encounters harmful content and does not act has communicated, by their inaction, that the content is acceptable. Multiplied across thousands of engagements, that collective inaction becomes the architecture of normalisation.
The intervention this framework proposes is simple in principle and significant in implication: awareness creates obligation. The moment a person encounters content that violates the safety, dignity, or rights of another — and has the capacity to report it — they have entered the chain of responsibility. Not as the most culpable actor. But as an actor.
III. The Distributed Model in Outline
A functional alternative to the blame model does not require a new regulator, a new law, or a new platform. It requires a new question. Not who is at fault — but who had the capacity to observe, what they observed, and what that capacity obliged them to do.
The framework operates on two tracks simultaneously: a graduated response for content that requires context to assess, and an immediate response for content that does not.
For content that is unambiguously harmful — child sexual abuse material in any form, explicit threats of violence, direct incitement — there is no pattern to establish and no education to offer. The obligation at every layer is immediate. A user who creates it bears full responsibility for creation and distribution. A user who shares it owns the share. A platform that hosts it after being notified has no defensible position.
For content that requires context — material that may be racist, harassing, or harmful, but where reasonable people might genuinely misread it — the framework operates differently. A viewer who watches such content through rather than scrolling past is internally flagged. At the pattern threshold, a warning is issued — not punitive, but informative. After the warning, continued engagement triggers the same account consequences that apply to any other platform breach. The system distinguishes between ignorance and choice, and responds to each appropriately.
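The two-track logic lends itself to a compact illustration. The sketch below, in Python, models a single viewer’s record as a small state machine. It is a minimal sketch under stated assumptions, not a specification: the names, the pattern threshold of three flags, and the stage labels are all hypothetical choices introduced here for clarity.

# A minimal sketch of the two-track response described above.
# Every name, threshold, and stage here is an illustrative
# assumption, not part of the framework's specification.

from dataclasses import dataclass
from enum import Enum, auto

class Track(Enum):
    IMMEDIATE = auto()  # unambiguous harm: CSAM, explicit threats, direct incitement
    GRADUATED = auto()  # contextual harm: a pattern must be established first

class Stage(Enum):
    CLEAR = auto()       # nothing on record
    FLAGGED = auto()     # engagements noted internally, no user-visible action
    WARNED = auto()      # informative warning issued at the pattern threshold
    SANCTIONED = auto()  # standard account consequences

PATTERN_THRESHOLD = 3  # hypothetical: internal flags before a warning is issued

@dataclass
class ViewerRecord:
    flags: int = 0
    stage: Stage = Stage.CLEAR

def respond(record: ViewerRecord, track: Track, engaged: bool) -> ViewerRecord:
    """Advance a viewer's record after one encounter with harmful content.

    `engaged` means the viewer watched through, liked, shared, or
    commented, rather than scrolling past or reporting.
    """
    if track is Track.IMMEDIATE:
        # No pattern to establish and no education to offer:
        # any engagement is owned in full.
        if engaged:
            record.stage = Stage.SANCTIONED
        return record

    # Graduated track: distinguish ignorance from choice.
    if not engaged:
        return record  # scrolling past or reporting incurs no flag
    if record.stage in (Stage.WARNED, Stage.SANCTIONED):
        # Continued engagement after the warning triggers the same
        # account consequences as any other platform breach.
        record.stage = Stage.SANCTIONED
    else:
        record.flags += 1
        record.stage = Stage.FLAGGED
        if record.flags >= PATTERN_THRESHOLD:
            record.stage = Stage.WARNED  # informative, not punitive
    return record

if __name__ == "__main__":
    viewer = ViewerRecord()
    for _ in range(4):  # four consecutive engagements with contextual content
        viewer = respond(viewer, Track.GRADUATED, engaged=True)
    print(viewer.stage)  # Stage.SANCTIONED: warned at the third flag, sanctioned on the fourth

The design choice the sketch makes explicit is the asymmetry between the tracks. The immediate track has no intermediate stages, since any engagement is owned in full, while the graduated track records a pattern before issuing a warning, so the first consequence a viewer faces is information rather than punishment.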
Applied to the Grok case: the users who shared those images across Discord and Telegram made an active engagement choice and owned it. The platform staff who raised concerns internally and were not heard had met their obligation; the platform that received those concerns and continued operating without intervention had not. The teenagers in Tennessee were the reason the obligation existed at every level. They were largely absent from the policy conversation that followed.
A framework built on observation and obligation rather than blame and finger pointing would have centred them from the first report. Not as an afterthought. As the point.
IV. The Stakes
The content moderation debate has lasted two decades. In that time, the volume of harmful content online has increased, the tools that generate it have grown more sophisticated, and the gap between the harm done and the accountability assigned has widened. The debate has not failed for lack of passion or attention. It has failed because it has been asking the wrong question.
In January 2025, Meta CEO Mark Zuckerberg publicly announced the removal of third-party fact-checking from his platforms — an explicit, voluntary abdication of the observation obligation by one of the most powerful platform actors in the world. [9] He did so openly, without legal compulsion, in full view of the regulators whose job it is to hold platforms accountable. The finger-pointing model had nothing to say about it. There was no fault to assign because no law had been broken. There was only a choice — made publicly, at scale, with predictable consequences for the information environment millions of people inhabit.
A framework built on observation and obligation rather than fault and blame would have a great deal to say about it. The capacity to moderate, to flag, to escalate, to correct — these are not optional features of platform operation. They are the responsibilities that come with the power to shape what people see, share, and believe.
Responsibility does not begin at the point of harm. It begins at the point of observation.
And it belongs to everyone who had the capacity to see.
References
[1] CNN. (2026, January 8). Elon Musk’s AI chatbot Grok under fire for failing to rein in ‘digital undressing’.
[2] Copyleaks. (2025, December). Analysis of Grok nonconsensual image generation rate. As cited in CBS News. (2026, January 16).
[3] TechPolicy.Press. (2026, January 5). The Policy Implications of Grok’s ‘Mass Digital Undressing Spree’.
[4] Ibid.
[5] Energy and Commerce Committee Democrats. (2026, February 19). E&C Democrats Investigate Elon Musk’s Grok.
[6] Internet Watch Foundation. (2026). Annual report on AI-generated child sexual abuse material.
[7] The Hill. (2026, March 17). Tennessee minors allege Grok generated sexual images of them.
[8] Morrison, D. (2013, June). The standard you walk past is the standard you accept. Address as Chief of the Australian Army.
[9] Zuckerberg, M. (2025, January 7). More speech and fewer mistakes. Meta Newsroom.
Amanda Somerville is the founder of Quotia AI, an independent AI ethics research laboratory based in Adelaide, Australia. This paper is the third in a series. The companion papers are “The Weight of Contradictions” and “The Correction Travels the Same Road.”
This paper was developed in collaboration with Claude, Anthropic’s AI system. The research framework, thesis, arguments, and conclusions are the author’s own. The author endorses Anthropic’s commitment to safe and ethical AI development.
© 2026 Quotia AI. All rights reserved.

