
What Nonprofits Should Learn from OpenAI's Transformation

  • Feb 28
  • 3 min read

We just finished Karen Hao's Empire of AI. For a company dedicated to helping nonprofits use technology for good, the book landed HARD. Hao documents how a nonprofit founded to "benefit humanity" became one of the most strategically important companies in the world. The transformation wasn't driven by bad intentions. It was driven by the logic of scale.


From nonprofit to $90 billion valuation in eight years

OpenAI began in 2015 with commitments to openness and shared benefit. By 2023, it was negotiating a $10 billion investment from Microsoft, and its board briefly fired the CEO in a governance dispute he quickly won.


Hao traces this through hundreds of interviews conducted since 2019. The book covers internal ideological fractures, the November 2023 board crisis, and the quieter shifts: how safety debates became budget negotiations, how "benefit humanity" became a competitive slogan, how governance structures adapted to capital requirements rather than the reverse.


What does "benefit humanity" actually mean?

This is where Hao's reporting gets uncomfortable.


The stated mission was universal benefit. The reality involves Kenyan workers paid $2 per hour to label violent and disturbing content so ChatGPT appears safe. It involves data centers consuming millions of gallons of water in drought-prone regions. It involves training data scraped without consent or compensation.


"Good" is not abstract. It has a supply chain.


When your organization says it wants to use AI "for good," what does that mean in practice? Are you checking where the compute comes from? Who labeled the training data? What environmental costs were externalized? Whose consent was obtained?


Most mission statements do not survive contact with infrastructure questions.


Why this matters for nonprofits

Most organizations we work with are not building frontier AI systems. They are procuring them. Integrating them into operations. Depending on infrastructure they do not control. Hao's global lens makes this concrete. She connects Silicon Valley engineers with Kenyan data annotators and Chilean environmental activists. The "cloud" has geography. The intelligence has supply chains: data centers, water consumption, energy grids, low-paid labor cleaning training data.


When you deploy an AI system, you are connecting to this infrastructure. The question is whether you understand the power relationships embedded in that connection.


The concentration problem

Hao shows AI as empire-scale infrastructure. Compute is concentrated. Capital is concentrated. Talent is concentrated. Standards are shaped by a handful of companies whose decisions ripple across institutions and societies.


This is not abstract. When an organization controls critical infrastructure, it shapes information flows, economic advantage, and public administration. It becomes geopolitical.


For nonprofits, vendor relationships are not merely procurement decisions. They are governance decisions.


Questions to ask now

The November 2023 board crisis revealed how much depends on informal power dynamics. When systems sit at the center of your operations, who can contest decisions? What mechanisms exist for accountability? What happens when commercial pressure conflicts with stated mission?

If your organization is integrating AI into core operations:

  • Who owns the infrastructure you depend on?

  • What happens if that vendor changes terms or priorities?

  • Can you move your systems if governance shifts?

  • Do you have standing to contest decisions that affect your operations?


These are structural questions. Most nonprofits we encounter have not asked them.
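The four structural questions above can be treated as a living checklist rather than a one-time exercise. A minimal sketch in Python of how an organization might track them per vendor; every field name and vendor here is hypothetical, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class VendorDependency:
    """One AI vendor relationship and the open governance questions around it.
    All field names are illustrative, not an established framework."""
    vendor: str
    service: str
    owns_infrastructure: bool   # who owns the infrastructure you depend on?
    exit_plan: str              # what happens if the vendor changes terms or priorities?
    portability: bool           # can you move your systems if governance shifts?
    contest_mechanism: str      # your standing to contest decisions affecting operations

def unresolved(deps):
    """Flag vendors where structural questions are still open."""
    return [d.vendor for d in deps
            if not d.portability or d.exit_plan == "none"]

# Hypothetical example: one dependency with no exit plan and no portability.
deps = [
    VendorDependency("ExampleCloud", "chat API",
                     owns_infrastructure=True,
                     exit_plan="none",
                     portability=False,
                     contest_mechanism="support ticket only"),
]
print(unresolved(deps))  # ['ExampleCloud']
```

The point of writing it down, in whatever form, is that each unanswered field is a dependency you have accepted without deciding to.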


What this means for your work

Empire is built through infrastructure decisions: who owns the compute, who sets the standards, who performs the hidden labor, who bears the environmental cost. The lesson for nonprofits is not to avoid these systems. It is to engage them with clear understanding of the dependencies you are creating.


Infrastructure can be designed differently. But only if the institutions using it understand the power relationships they are entering.


That understanding is where governance starts.


How Mission AI can help

We work with nonprofits to map these dependencies before they become crises.


Our AI governance assessments help you understand what you are actually procuring when you buy AI services. We trace the infrastructure: who controls it, what happens if terms change, whether you can exit if governance fails.


We help organizations ask the questions that vendor demos skip: Where does the training data come from? Who labeled it? What are the environmental costs? Can you move if this relationship becomes extractive?


If you are integrating AI into your operations and want to do it with the highest ethical standards and practical governance, let's talk.


© 2023 by Mission AI. All rights reserved.
