From One Monopoly to Another? Why This Is the Moment for Citizens to Claim AI as a Public Good
- Feb 4
Across Europe, public opinion on AI is shifting in ways that matter for how power gets organized.
According to EY's 2025 European AI Barometer, people increasingly see AI as economically useful but politically risky. They associate it with productivity and efficiency—and with surveillance, job loss, and disinformation. The data shows Europeans are not anti-technology. They are cautious about who controls it and how that control gets exercised.
And beneath this ambivalence, something deeper is changing.
European institutions are no longer talking about digital innovation in the abstract. Instead, there's a growing recognition that AI is now a constitutional and democratic issue, not just an economic one. When algorithms shape access to employment, credit, healthcare, and public services, they become infrastructure for democratic life. And infrastructure of that kind cannot be left to markets alone.
What's Driving the Shift?
The growing backlash against US tech influence in European politics has accelerated concerns about dependence: not just on foreign infrastructure, but on opaque systems that are structurally misaligned with democratic governance.
The idea that core digital infrastructure (cloud platforms, data pipelines, algorithmic systems) can safely remain in the hands of a few US corporations is losing legitimacy fast. There is a mounting sense that Europe cannot safeguard its democracies on someone else's servers, under someone else's terms of service, optimized for someone else's business model.
That's why we're seeing a policy pivot:
From regulating content to reshaping infrastructure
From market innovation to constitutional protection
From following the US model to building an autonomous, values-based one
This is not just a story about Europe "standing up" to Big Tech. It's also about the kind of future we risk walking into. Challenging US platform dominance is necessary, but replacing one set of monopolies with another, even European ones, is not the answer. If AI governance simply shifts from Silicon Valley boardrooms to Brussels bureaucracies, or from US tech giants to European national champions, citizens remain in the same structural position: outside the room where decisions are made.
The Real Opportunity
The real opportunity here is bigger: to break from the idea that AI and digital infrastructure must be controlled by either states or corporations. To create a model that centers citizens—not just as users or voters, but as co-owners and stewards.
This means asking questions that have rarely been central to AI development:
What if AI were governed as a public good, not a private asset?
What if communities could control their data, shape their systems, and share in the value created?
What if infrastructure (from chips to clouds to models) were designed with democratic governance built in from the start?
These are not utopian questions. They are design questions. And they have practical answers: cooperative ownership models, public-interest technology foundations, participatory governance frameworks, open-source development with community oversight, data trusts that give citizens collective bargaining power.
At Mission AI, we work with organizations navigating this terrain. Not every client is building cooperative AI or public-interest technology. But every organization deploying AI now operates in a context where legitimacy depends on more than legal compliance. It depends on whether systems are perceived as serving people or extracting from them. Whether they concentrate power or distribute it. Whether they are designed with affected communities or imposed on them.
The EY Barometer data makes this concrete. European citizens want AI that creates economic opportunity—but not at the expense of rights, autonomy, or democratic accountability. Organizations that grasp this early can position themselves not as part of the problem, but as part of a different model.
This Is the Window, and the Time Is Now
Public awareness is rising. Political will is shifting. The idea of values-based AI is no longer marginal: it's in legislation, strategic frameworks, and public discourse. The EU AI Act, the Data Governance Act, and emerging digital sovereignty strategies all reflect this pivot. But the gap between governments talking about rights and citizens exercising power remains wide. That's why this is the moment for movement-building, not just policymaking.
We need to:
Support citizen-led and cooperative AI projects that demonstrate alternative models
Push for structural reforms in how AI systems are built, financed, and owned
Insist on public participation in AI governance—not as an afterthought, but as a foundational design principle
Create procurement frameworks that reward democratic governance, not just technical performance
Build coalitions between civil society, mission-driven businesses, and progressive institutions
Because if we don't shape this transition, we risk watching power shift from Silicon Valley to Brussels without ever leaving the hands of the few.
What This Means for Organizations
For businesses and institutions deploying AI, this shift creates both risk and opportunity:
The risk: Being seen as part of the old model of extraction, opacity, and concentration of power. As public skepticism grows and regulation tightens, organizations that cling to unaccountable AI will face reputational damage, regulatory scrutiny, and citizen resistance.
The opportunity: Positioning as part of the solution. Organizations that embed democratic principles into AI governance, involve affected communities in design decisions, and demonstrate genuine accountability can build trust and legitimacy in ways that create lasting competitive advantage.
This is not about corporate social responsibility as window dressing. It's about fundamentally rethinking what responsible deployment looks like in a context where AI is understood as public infrastructure, not private tooling.
Key Implications from the EY Barometer
The 2025 European AI Barometer findings point to specific tensions organizations must navigate:
Economic optimism meets political anxiety. Europeans see AI's productivity potential but worry about its democratic consequences. Organizations must demonstrate that efficiency gains don't come at the expense of rights or autonomy.
Trust is conditional. Public acceptance of AI depends heavily on who deploys it, for what purpose, and with what safeguards. Generic claims about "innovation" or "transformation" no longer persuade. People want specifics.
Transparency is expected, not optional. Citizens increasingly demand to know when AI is being used, how it makes decisions, and who can be held accountable when things go wrong. Organizations that treat this as a compliance burden rather than a design principle will struggle.
Sovereignty matters. Europeans are thinking seriously about who controls the infrastructure their societies depend on. This creates space for alternative models—if organizations are willing to build them.
The Alternative Is Harder. But It's Worth It.
AI can be a public good. But only if we build it that way.
This means more than regulation. It means creating governance structures that give people agency over the systems that shape their lives. It means experimenting with ownership models that share value rather than concentrate it. It means building institutions (cooperatives, foundations, public utilities) that can steward AI in service of democratic goals.
Mission AI works with organizations ready to engage this complexity. Not because it's easy, but because the alternative, watching power shift from one monopoly to another, is unacceptable.
If your organization is asking how to build AI systems that earn trust, align with European values, and contribute to democratic resilience rather than undermine it, this is the moment to act. The window is open. But it won't stay open forever.
Ready to build AI as a public good?
Mission AI supports organizations designing governance frameworks that center people, not just performance. If you're navigating the intersection of AI, democracy, and legitimacy, let's talk.
Contact Mission AI to explore how we can support your approach.