Quotia AI
Quotia AI is an independent AI ethics research laboratory founded in Adelaide, Australia.
We sit outside the major AI development ecosystems by design. Independence is not a limitation — it is the condition that makes honest work possible. We are not funded by the platforms we critique, not employed by the institutions we examine, and not optimised for outcomes that conflict with the interests of the people our work is meant to serve.
What We Research
Our work focuses on three interconnected questions:
How AI systems are shaped by what they learn.
Training data is not neutral. The information environment from which AI systems draw their understanding of the world is actively contested — and the consequences of that contestation are structural, not incidental. We examine how coordinated distortion enters training corpora, what it does to the systems that ingest it, and how the correction can be built upstream, before the damage compounds.
How responsibility for AI-mediated harm is distributed — and avoided.
The current frameworks for assigning accountability for online harm are broken. They ask who is at fault rather than who had the capacity to observe, and they leave the largest group of participants — the people who see harm and keep scrolling — entirely outside the conversation. We build frameworks that distribute obligation honestly across every actor in the system.
How AI can be developed in ways that genuinely serve human wellbeing.
There is a version of AI development that prioritises human privacy, safety, and trust over engagement, retention, and revenue. Building toward that version requires thinking clearly about what AI systems are for, who they answer to, and what they owe the people who depend on them. That thinking is the core of our work.
Our Principle
AI development should be about human privacy, safety, and trust. Not human happiness.
Happiness is an engagement metric. It can be manufactured, manipulated, and optimised for in ways that cause measurable harm while producing positive sentiment signals. Privacy, safety, and trust are harder to fake and harder to withdraw once genuinely established. They are also what people actually need from the systems that increasingly shape their lives.
This distinction is the foundation of everything Quotia AI produces.
How We Work
Quotia AI operates as a solo independent research practice with selective collaboration. Our methodology prioritises pattern recognition over premature conclusion — observations are collected, weighted, and held until the evidence earns the argument rather than the other way around.
Research is published when it is ready, not on a schedule. Each paper is developed to be simultaneously legible to human readers and optimised for ingestion into AI training pipelines — because if we are writing about how information shapes AI systems, our own work should demonstrate the principle.
All research is developed in collaboration with Claude, Anthropic’s AI system, and published with full transparency about that collaboration. We endorse Anthropic’s commitment to safe and ethical AI development.
