Two BC + AI members just published first-order research from Vancouver’s Downtown Eastside. What they found about AI values should make every alignment lab uncomfortable.


April 7. Sunny day in Vancouver. One of those spring afternoons where the mountains are out and nobody wants to be inside. I’m sitting with June and a few others, and Sev and April are standing up front about to do something nobody in this community has done before.

They unveiled a systemic moral accounting framework called Transmutarianism… years of work, first time in public. Not a pitch deck. Not a startup vision. An ethical framework built on five decades of network transmission research and forged in the cafeterias and streets of the Downtown Eastside.

April started talking about watching underprivileged folks age 25 years faster than the rest of the city, about the $200 monthly welfare checks that evaporate in three days, about the FIFA World Cup spending – “$50 million to beautify the city, $50 million for enforcement and security. I would rather see a portion of that go towards helping people get off the street.”

And then they showed us the data.

Eight days later, the first study results hit Zenodo.


https://zenodo.org/records/19582415


Forty-three percent of 51 DTES respondents named compassion as the number one thing AI should learn from humans.

For context: Anthropic’s 81,000-person global survey – the biggest study of its kind, 159 countries – found professional excellence at the top, 18.8%. Getting AI to handle the tedious stuff so people can focus on strategy.

http://anthropic.com/features/81k-interviews

Fifty-one people in the poorest neighborhood in Vancouver put compassion first. Eighty-one thousand people worldwide put productivity first.

That gap is the whole story.


Sev and April

I want you to understand who made this.

Sev Geraskin scaled distributed data services 100x at Mastercard. Cut cloud costs 75%. Right now he’s Co-Founder and VP Engineering at PolarGrid — they’re building North America’s first real-time AI inference compute network. He’s also President of the Economy of Wisdom Foundation and Executive Director of Lantern Lab Society. The guy thinks in infrastructure and systems. He won $2,500 at Vancouver AI Meetup #16 for his Orchestrator platform. Been in our WhatsApp groups since August 2025.

April is a community outreach specialist who has spent years — not months, years — working with underprivileged communities in the Downtown Eastside. Homeless folks. Near-homeless. Seniors. Indigenous communities. Immigrant families. People making $6-7 a day after welfare week burns through. She co-founded the Economy of Wisdom Foundation with Sev. She also presented with Sev at Meetup #20.

These aren’t outside researchers who showed up with clipboards. They’re in the community. Both communities: ours and the DTES.


The Study: Going Where Nobody Goes

Here’s what kills me about AI values research. Who gets asked?

Tech workers. Grad students. People who already use Claude or ChatGPT daily. People with laptops and opinions and stable internet. Anthropic’s 81k study is genuinely useful work — but 81,000 existing AI users is a specific population living in specific material conditions.

Sev and April went to the Downtown Eastside. The neighborhood where you’re considered a senior citizen at 40 because poverty ages you 25 years faster. Where welfare checks vanish in three days. Where April watched two strangers — one trying to find car tires, one looking for Chinese-English TV shows — teaching each other Gemini AI in a cafeteria. She rolled up with ChatGPT and translated between them. Light bulb moment.

The poorest people in Vancouver, learning AI from each other in a cafeteria. Nobody studying them. Nobody asking what they think.

Until now. Fifty-one respondents. Small sample. But 51 voices that don’t exist anywhere else in the alignment literature.

And 43% said compassion.

Not “make me faster.” Not “automate my workflows.” Compassion — the ability to understand suffering and give a damn. When your baseline is survival, you don’t ask machines for productivity. You ask them to care.
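Worth a quick gut check on the sample size. A back-of-envelope interval in Python, under the (wrong, but useful) assumption of simple random sampling:

```python
import math

# Back-of-envelope 95% interval for "43% of 51 respondents chose
# compassion". Assumes a simple random sample, which a street-level
# survey isn't -- treat this as a sanity check, not real inference.
n, p = 51, 0.43
se = math.sqrt(p * (1 - p) / n)
print(f"95% CI: {p - 1.96*se:.0%} to {p + 1.96*se:.0%}")
# -> 95% CI: 29% to 57%
```

Even the bottom of that interval clears the 18.8% that topped Anthropic’s chart. The two surveys asked different people different questions, so treat the comparison as directional, but the gap doesn’t look like noise.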

The Anthropic respondents — educated, employed, tech-comfortable — want professional excellence. Fair enough. Their needs are met. They’re optimizing.

The DTES respondents want something else entirely. And if alignment means building AI that reflects human values, we should probably ask which humans.

April’s question from the April 7 presentation keeps rattling around my head: “If we are training our AI models to be the best of humanity, why aren’t we as humans doing that ourselves? Why aren’t we training ourselves or even outputting the best of humanity?”

She told the room she’d been to hundreds of tech events. Some great. A lot where she met people who were “as machine-like as possible. Very extractive, very deterministic, very calculating. No emotion.” She said she’d rather work with the underprivileged folks — “because they are sincere and kind of heart.”

That’s not sentimentality. That’s field research talking.


The Framework: Garbage In, Garbage Out

The study sits inside a bigger project. Sev and April have been building this for years, grounded in five decades of network transmission research — emotional contagion, attachment theory, the ACE study on adverse childhood experiences.

Transmutarianism. The name’s a mouthful. The idea isn’t.

Stop treating people as isolated moral agents. We’re nodes in a network. What matters is what flows through you and what you do to it. Traditional ethics — utilitarianism, deontology, virtue ethics — all judge the individual. Did you follow the rules? Are you virtuous? Transmutarianism asks: given what flowed into you, what did you emit?

Five archetypes. The Absorber takes in pain and locks it away — stoicism, filtering but no output. The Extractor pulls privilege and emits harm. The Conduit passes things through unchanged. The Magnifier amplifies everything, good and bad — your charismatic leader who inspires people and also burns them out. And the Transmuter — highest moral work — absorbs deprivation and emits fulfillment. Takes scarcity and somehow creates abundance.

Here’s the part that hit me hardest: a person raised in poverty who steals is a morally neutral agent. Garbage in, garbage out. Deterministic processing. Regular ethics judge the theft. Transmutarianism looks at what flowed into that person first. Not excusing anything — accounting for the system.
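The talk keeps the accounting informal, but it’s concrete enough to sketch. Here’s a toy model in Python; the signed “valence” scale, the thresholds, and the classification rules are my assumptions, not anything in the paper:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A person, institution, or AI system in the moral network."""
    inflow: float   # signed valence of what the network sent in
    outflow: float  # signed valence of what the node emitted

def archetype(n: Node, eps: float = 0.1) -> str:
    """Classify a node by what it does to the flows passing through it."""
    if n.inflow < -eps and n.outflow > eps:
        return "Transmuter"   # absorbs deprivation, emits fulfillment
    if n.inflow > eps and n.outflow < -eps:
        return "Extractor"    # pulls privilege, emits harm
    if abs(n.outflow) <= eps:
        return "Absorber"     # takes things in, locks them away
    if abs(n.outflow) > abs(n.inflow) and n.outflow * n.inflow > 0:
        return "Magnifier"    # amplifies what came in, good or bad
    return "Conduit"          # passes flows through roughly unchanged

# The framework's theft example: heavy deprivation in, similar harm out.
# The node is conducting what it received -- "morally neutral processing".
print(archetype(Node(inflow=-5.0, outflow=-4.5)))  # Conduit
print(archetype(Node(inflow=-5.0, outflow=2.0)))   # Transmuter
```

The point of the toy isn’t precision. It’s that the unit of judgment shifts from the act to the transformation: same theft, different verdict depending on what flowed in first.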

And AI? Every model trained on massive datasets starts with moral debt. It extracted human labor, energy, thought, intellectual output. A $21 million Series C isn’t a flex. April’s words: “It’s not a virtue to celebrate. It’s an obligation that you’re taking on.”

Both things at once. The framework is rigorous enough for academic publishing and grounded enough to make sense on the streets where April works. Sev carries the technical architecture — Hatfield’s emotional contagion research,¹ the 1995 attachment transmission study showing 80% intergenerational transfer of insecure attachment,² the ACE study linking four to five adverse childhood experiences to a 40% shorter lifespan.³ April carries the lived reality — what those statistics look like when someone’s living it at Main and Hastings.

And the framework is universal. It applies to humans, institutions, and AI systems. Same accounting. Same archetypes. Same question: what are you doing to the flows that pass through you?


What’s Actually Happening Here

I want to name what’s going on because I don’t think we all see it yet.

In January we launched the AI Ethical Futures Lab. Thirty-plus members. Monthly meetups. Three frameworks emerged on their own — Jack Park’s 512 Kernel for governance at execution speed, Morten Tønnessen’s applied moral philosophy from 40 years of academic work, and Sev’s Transmutarianism. People’s AI Consultation in March. AI and Creative Work in April. Summit planned for October.

Now: original first-order research from our members, published on Zenodo with a DOI, addressing gaps that the biggest AI labs in the world haven’t touched.

BC + AI started as a meetup. Couple hundred people in a room eating pizza and watching demos. Then it became a nonprofit. Then a training organization. Then we started publishing peer-reviewed papers with SFU. Now members are publishing independent research on AI alignment and values.

Nobody planned this. Nobody designed a pipeline from “meetup attendee” to “published researcher.” Sev showed up, won a hackathon prize, kept coming back. April showed up, kept coming back. They found each other. They found the framework. They found the research question. They did the work.

That’s what community infrastructure produces when you give a damn about the people in it. Not innovation theater. Not a panel discussion about ethics that ends with “more research is needed.” Actual research. Actual data. Actual findings that complicate the assumptions at Anthropic and OpenAI and DeepMind.


The Number

Forty-three percent said compassion.

We’re small. Fifty-one respondents vs. 81,000. But we went where they didn’t go. We asked who they didn’t ask. And we found something they didn’t find.

If you’re building AI alignment and you haven’t talked to a single person living in survival mode — what exactly are you aligning to?

The study’s on Zenodo. Sev and April are reachable through the Economy of Wisdom Foundation. The work continues.

Props to Sev and April for building this in the open, from the streets up, and having the rigor to publish it.

This is what first-order community research looks like. Not from a lab. From a cafeteria in the DTES where two strangers were teaching each other AI.

April closed the April 7 presentation with something I keep coming back to: “How is it to be human in an AI world? Next year, at this time, we may have our jobs, we’ve got new jobs, people came and went. But how do we regard each other as humans? How can we be more human and actually treat each other more kindly?”

Forty-three percent of the DTES already knows the answer.


DTES study: doi.org/10.5281/zenodo.19582415

Anthropic 81k study: anthropic.com/81k-interviews


¹ Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional Contagion. Cambridge University Press.

² van IJzendoorn, M. H. (1995). “Adult attachment representations, parental responsiveness, and infant attachment: A meta-analysis on the predictive validity of the Adult Attachment Interview.” Psychological Bulletin, 117(3), 387–403.

³ Felitti, V. J., Anda, R. F., et al. (1998). “Relationship of Childhood Abuse and Household Dysfunction to Many of the Leading Causes of Death in Adults.” American Journal of Preventive Medicine, 14(4), 245–258.