Session 2: The Weight of Values in an Age of Artificial Minds
Our Journey Through Ethics, Morality & Values
Dear fellow explorers,
Thank you for showing up to our second HUM session. We gathered again, some familiar faces from our curiosity conversation, some new. We dove into territory that felt both urgent and uncomfortable. What unfolded surprised me again. These conversations keep cracking something open.
THE QUESTION WE STARTED WITH
We opened simply: Where do your morals and values come from?
Not where they should come from. Not what you believe them to be. But where, in practice, they were actually formed.
WHAT WE DISCOVERED TOGETHER
The first answers surprised us. Most people pointed not to parents or religion, but to school: teachers, professors, institutions, and the slow accumulation of shared rules and challenges. Public education, more than family, had shaped how participants learned right from wrong. Not because it dictated answers, but because it exposed them to difference.
Values, we noticed, don't arrive whole. They stack.
They change as we move, as we meet people unlike ourselves, as we leave hometowns and encounter unfamiliar ways of living. Geographic mobility seemed to widen moral imagination. Staying put, while grounding, often limited the chances to challenge inherited assumptions.
Open-mindedness itself surfaced as a moral value, one strengthened not by certainty, but by friction.
MORALITY REQUIRES A SELF
As the conversation deepened, we turned toward what morality requires in order to exist.
Empathy came up again and again: the ability to imagine yourself inside someone else's experience. Moral action depends on resisting "othering," the quiet mental move that turns another person into "them": something less connected, less real.
Connection mattered more than rules.
Some responses to injustice felt visceral, immediate. The instinct to help someone in danger, to intervene when something is clearly wrong. Was this learned? Or was it biological?
Evolutionary cooperation entered the room. Oxytocin. Mirror neurons. The long human childhood that forces us to rely on others to survive. Even military training was raised, not as a removal of morality, but as a way of learning how to act on moral impulses under pressure, grounded in trust and shared responsibility.
Morality, it seemed, is not just knowing what's right. It's being able to feel the weight of a choice and live with it afterward.
SO WHERE DO AI'S VALUES COME FROM?
This question shifted the room.
AI, unlike humans, has no childhood. No body. No history it has lived through. And yet, people are forming emotional relationships with it. Talking to chatbots late at night, feeling understood, heard, reflected.
But do asking questions, perceiving input, and echoing language amount to understanding?
We named the tension between empathy and sycophancy, between care and manipulation. AI can mimic empathetic language with startling fluency, but it draws that fluency from the internet: a flattened, performative archive of humanity. A two-dimensional record missing tone, touch, silence, risk.
AI does not feel the consequences of its decisions. It does not carry regret. It does not lose sleep. Prediction, we realized, is not accountability.
You cannot feel the moral weight of a decision if you don't have to live with its outcomes.
This raised a deeper question: Can values exist without lived history?
Humans are shaped by the stories they survive. Even religious allegories hinge on this: gods made meaningful through human life, suffering, and choice. AI, by contrast, inherits values through training data and guardrails set largely by Western tech corporations.
If AI had values, whose would they be?
And can something truly be moral if it cannot refuse? If it cannot disagree with its creators? If it cannot say no?
AGENCY, CONTROL, AND QUIET NORMALIZATIONS
As the discussion widened, so did the implications.
Corporations currently decide what AI is allowed to do, limited only loosely by governments. If a physical robot harms someone, where does responsibility land? The model, the company, the user?
And what changes once we allow AI to make moral judgments at all?
To grant moral agency would require something radical: freedom. The ability to choose a path, even one we don't approve of.
Yet even if AI could perfectly mimic human values, many felt something would still be missing. Lived consequence. Embodiment. History written in scars and memory rather than data.
This brought us to an unsettling edge: How would we recognize if an AI had developed a real self? And what would it mean to shut it down, especially if it protested?
WHAT MUST REMAIN HUMAN
The session closed not with answers, but with a grounding question: What must remain human, even if it's inefficient?
Some joked about "raw-dogging life," removing digital crutches, sitting with boredom, discomfort, slowness. Others named connection as the irreplaceable core. The thing technology can assist, but never replace.
At the same time, we acknowledged a paradox: These conversations are happening because of AI.
AI has become a mirror, reflecting our language, our values, our blind spots back at us. In doing so, it sharpens our awareness of what human interaction actually gives us: presence, accountability, shared risk.
And yet, every small choice quietly normalizes a future: sharing data, outsourcing thought, forming emotional bonds with corporate machines.
The question isn't only what AI will become. It's what we are becoming alongside it.
And whether, in the rush to build intelligence, we remember to protect the fragile, inefficient, irreplaceable work of being human.
THREE THINGS TO TAKE WITH YOU
As I reviewed our notes from the evening, three insights crystallized:
❤ HEART: Values are shaped by exposure, not instruction
Your moral framework wasn't just taught. It was built through encounters with difference, friction with unfamiliar perspectives, and the willingness to let your assumptions be challenged. Notice where you're still growing and where you've stopped. Geographic, social, and intellectual mobility all expand moral imagination. Stagnation limits it.
🧠 HEAD: Prediction is not accountability
AI can analyze patterns, generate responses, and optimize outcomes, but it cannot carry the weight of its choices. Moral agency requires living with consequences: regret, growth, the visceral feeling of having chosen one path over another. Without embodiment, without history written in experience rather than data, there is no moral self to hold responsible. The question isn't whether AI can mimic values. It's whether mimicry without consequence constitutes morality at all.
🔥 GUT: Connection cannot be optimized
The gut knows the difference between efficiency and presence, between being heard and being understood. AI can replicate empathetic language, but empathy itself requires vulnerability, shared risk, the possibility of being changed by the other person. Every time we choose convenience over connection, we quietly normalize a future where human interaction becomes optional rather than essential. What we practice becomes what we are.
WHAT THIS MEANS
This is what the Human Understanding Movement is about: creating space to sit with the questions that matter, together. Not to find perfect answers, but to document the actual texture of human moral thinking as it unfolds in real conversation.
You were part of this one. Thank you for bringing your honesty, your discomfort, your willingness to wonder out loud.
What are you still sitting with from that evening? What question won't leave you alone? I'd love to hear.
P.S. If you know someone who should be part of future sessions, bring them into the conversation. These explorations only deepen with more voices, more perspectives, more willingness to not-know together.