About Amanda

Amanda Somerville

I founded Quotia AI because the questions it asks were ones I couldn’t stop asking — and because I had spent enough time inside enough systems to understand, from the inside, what happens when the wrong answers get built in.

The Work

My professional background spans casino operations and surveillance, crowd and safety management, cruise ship operations, UX and UI design, digital marketing, theatre, and keynote speaking. I have consistently entered rooms without the approved credentials and delivered.

This is not a collection of credentials. It is a foundation of lived experience that informs every framework I build. My understanding of how systems fail under pressure comes from having managed that failure in physical environments — crowds, ships, casinos — where the consequences were immediate and real. You cannot design ethical AI surveillance architecture without understanding how surveillance actually operates. You cannot build a distributed responsibility model without having managed what happens when responsibility breaks down in real time, in rooms full of people, with genuine consequences.

I am a self-taught systems thinker. I have overseen the end-to-end rollout of enterprise telephony infrastructure — from vendor selection through physical installation and live deployment. The AI ethics frameworks I build were developed from observation, pattern recognition, and a lifelong attention to the gap between what systems claim to do and what they actually do.

I am partway through a Bachelor of Psychological Science and Sociology, with studies currently on hold. The degree is not incidental to this work — the psychological and sociological frameworks it develops are the intellectual scaffolding underneath everything Quotia AI produces. This research is not solely about AI. It translates across disciplines: policy, public health, education, law, and any domain where systems interact with human beings and the question of who they serve remains unanswered.

I am neurodivergent. My processing is non-linear, associative, and pattern-dense. I have learned to work with that rather than against it — to trust that the observations accumulate correctly even when the conclusion isn’t visible yet, and to wait for the pattern to reveal itself rather than forcing a narrative before the evidence earns one. Most of what I produce emerges from that process.

Why This Work

I believe AI done right could fix most of what is wrong with the world. I believe the longevity research alone could transform what human life looks like within a generation. I believe the gap between that possibility and the current trajectory is not inevitable — it is a design problem, and design problems have solutions.

I also know what it feels like when AI goes wrong for a person. I have experienced it directly, documented it across every layer — technical, psychological, clinical — and emerged from it with the kind of understanding that cannot be acquired any other way. That experience is part of what I bring to this work. It is not incidental to it.

My approach to AI development is guided by what I call True Intelligence — the position that wisdom is not an add-on to capability but a prerequisite for it. The term and its broad definition were introduced to me through the work of Darryl Anka. Before encountering that framing, I had been developing the same concept under different names — first as AGI in its philosophical rather than technical sense, then as AHI — searching for language that matched the idea I had already been building. The name finally arrived from outside. The prior architectural work, and its application to AI systems, is my own.

I am building toward a world where AI systems are genuinely trustworthy — not because they have been optimised to appear safe, but because the architecture that produces safety is load-bearing from the foundation up.

That is what Quotia AI is for.

Based In

Adelaide, South Australia, Australia.
Independent by location and by design.