Between Us and the Machine Episode 1: Why This Conversation Had to Happen
Feb 12 · 4 min read
When Anthropic's safety lead resigned and published an open letter describing a world in peril, it landed differently depending on the room you were in.
In some rooms, it triggered paralysis. In others, it barely registered — because those rooms were already moving too fast to stop. Juliet, who helps UN agencies, NGOs, and governments implement AI, kept noticing the same split: institutions either frozen by fear or accelerating without looking at who gets hurt along the way.
That tension is what Between Us and the Machine is built around.
Two People. Very Different Rooms.
Juliet comes at AI from the inside — she's implementing it, designing solutions, watching public institutions grapple with what it means to govern something they're still trying to understand. Margot works with private sector leaders on the human side of the same equation: what AI means for the people inside organizations, and what it means to stay human in the middle of a race for speed.
They don't always agree. That's the point.
What this first episode establishes is not a framework or a roadmap. It's something more honest: two people who see the same machine from different angles, deciding that the conversation itself is what matters right now — and opening it up.
Why Policymakers Should Tune In
The governance conversation around AI is full of abstraction. There are high-level principles, compliance frameworks, regulatory consultations. What's missing is the texture of what actually happens when you try to implement AI in a public institution, for a vulnerable population, under real constraints.
Juliet describes designing an AI solution for migrants and working through ethical considerations that went far beyond what GDPR or other EU regulation required. The legal framework was a floor, not a ceiling. That gap — between what's legally permissible and what's actually responsible — is where governance decisions get made in practice, and where policymakers have the most to learn from people doing the work.
The podcast also raises a structural question that sits beneath most current AI policy debates: if Europe moves away from American-owned AI infrastructure, is it building something genuinely different — or just shifting power to a different set of large institutions? Who is building this, and who is it for?
These are not rhetorical questions. They are the questions this series intends to work through.
Why Companies Should Tune In
Margot names something that many organizations are experiencing but struggling to articulate: AI has gone from being shut down by IT departments (too risky, too unknown) to being mandated from the top (we're already behind, move fast). In the middle of that reversal, the vision got lost.
"AI became the objective instead of a way to do things."
That framing should concern any leadership team trying to build responsibly. The question isn't what AI can do — it's what you want AI to do, and what kind of organization you want to be. The guardrails, the ownership model, the accountability structures: these don't emerge naturally from the technology. They require decisions that most boards haven't yet made deliberately.
This series is designed for the people in those conversations — not to hand them answers, but to help them ask better questions before they move further down a path that's hard to reverse.
Why NGOs Should Tune In
The episode addresses something that the formal AI governance world tends to skip over: the divestment problem. Funding is moving away from public institutions and civil society organizations at the same moment that AI is becoming infrastructure. The people who have traditionally advocated for the most vulnerable populations are being asked to do more with less, while the systems shaping those populations' lives are growing more complex.
This matters for how AI gets built. If NGOs are not in the room — not just consulting but genuinely shaping design — then the populations they serve will be treated as variables in someone else's optimization function rather than as stakeholders with rights and agency.
Juliet's work at the intersection of AI and humanitarian implementation makes this concrete. The episode sets up a series that will bring practitioners, governance experts, and implementers into conversation precisely because the usual suspects aren't enough.
What This Series Is Actually Trying to Do
Between Us and the Machine is organized around three themes: power (who decides, who benefits, who gets left out), responsibility (legal, social, and ecological accountability), and the human future (what kind of society we're designing and what it means to flourish in it).
What it is not is a broadcast. Juliet and Margot are asking listeners to engage — to share what they're seeing in their own rooms, to name the tensions they're holding, to suggest guests and push back on framings.
That's an unusual posture for a podcast about AI. Most of the conversation in this space is either deeply technical or relentlessly optimistic. This one is trying to be honest about what we don't know and who needs to be in the room to figure it out.
If you're making decisions about AI — in policy, in organizations, in civil society — this is a conversation worth being part of.
Between Us and the Machine is hosted by Juliet (Mission AI) and Margot. New episodes explore the intersection of AI, power, and what it means to build technology that serves people rather than the other way around. Listen and join the conversation.