The OpenAI Power Struggle Has Implications Beyond OpenAI

With Sam Altman returning to OpenAI, the week-long saga that nearly ruptured the world’s most powerful AI company is, for now at least, over. But the drama highlights a new power dynamic in corporate America, one where the “hard” power of executives and boards is diminished and decision-making has become a distributed process.

No longer do we live in Jack Welch’s America, where CEO leadership was buttressed by employee fealty to corporate structure and hierarchy. When the top boss made a decision, workers were expected to follow it (or at the very least live with it) without argument and without voicing dissent.

But in recent years that dynamic has shifted. Grasping those new forces isn’t easy for CEOs. Power flows can’t be diagrammed as neatly as an org chart. Soft power doesn’t work the same way across different organizations. Management power feels more tenuous, dependent on the consent of many more stakeholders than just the board of directors. The CEO may hold decision-making authority on paper, but without the consent of the managed, and even of partners, that authority is proving surprisingly delicate.

The successful revolt staged by OpenAI’s employees to bring Sam Altman back as CEO demonstrates yet another evolution of corporate dynamics. The employees didn’t just want to be considered in the decision-making process; they exercised de facto limits on the board’s power.

Considering the concerns of the workforce is no longer sufficient for leaders. The lines of authority have blurred enough that what is emerging is more of a dance between management teams and employees than a transmission of marching orders from bosses to subordinates.

The degradation of corporate hierarchy has been fueled by the broad availability and free flow of information brought by the Internet and social media age. Just as formal political party and media power structures have been weakened by distributed voices of information, analysis and opinion, so too have corporate governance structures.

Employees have been effective in channeling their own improvised power pathways using the same tools that organizations use for their own information flows. Companies’ internal Slack platforms, for example, often serve as amplification channels for employees voicing concerns and dissent – and as an unauthorized organizing tool.

Massive walkouts at Google pushed leadership to change its sexual harassment policies, while Amazon has spent months negotiating to get its unwilling employees back into the office. Even staffers at the State Department and in Congress have begun vocally pushing back against their bosses’ policies when they disagree.

What makes the OpenAI situation all the more interesting is that the board appears to have been acting not to protect the financial welfare of the organization, but to protect the nonprofit’s do-good purpose of “ensuring artificial general intelligence (AGI) benefits all of humanity.” In the board’s view, Altman was running afoul of that mission by aggressively pursuing profit and business opportunity at the expense of transparency and safety.

Also playing against type, a large majority of the OpenAI workforce sided with Altman, using their influence as highly in-demand technical experts to force the board’s hand.

But whether Altman’s actions were truly hurting OpenAI’s mission comes down to a matter of opinion. The split at OpenAI was created in part by its schizophrenic profit-vs.-nonprofit governance structure, which meant the organization never put a single stake in the ground on its purpose and direction – no clear statement or example of what “AGI that benefits all of humanity” actually means.

The board seems to have had one definition, Altman and his employees another. Rather than building a clear consensus among stakeholders about what the company should stand for and what its mission meant, the company’s various factions slowly retreated into their respective corners, ultimately breaking apart when pressure was applied.

Historically, a board of directors could set a new tone for a company simply by replacing the CEO. Today that’s much more difficult to pull off. The employees at OpenAI made it very clear that if the board attempted to change the status quo, it would literally lose the company. Because OpenAI failed to do the hard, and occasionally divisive, work of not only defining its purpose but also incorporating that purpose into its governance and strategy, the employees were able to own the company’s mission and direction in their own way and wield it to their advantage.

Not every company with a vaguely humanistic mission is doomed to suffer the same operatic drama OpenAI just went through. The company’s massive valuation, combined with the grave implications AI could carry with it, amplified every conflicting signal, leading to public calamity.

But OpenAI does show just how decentralized power has become. Most organizations won’t implode over the course of a week the way OpenAI threatened to. But if corporate leaders don’t take the job of creating stakeholder consensus seriously, more will.

Businesses are going to need to build that consensus around what they believe in and stand for. It’s hard work, but done properly, it helps attract the right talent and the right decision makers – as well as the right investors and directors. It also keeps the wrong folks out, ensuring that an organization isn’t co-opted into something its leaders never intended.

The Sam Altman drama demonstrates that new dynamic. His vision, though clearly not aligned with his board’s, was in lockstep with his employees’, creating a power center capable of overwhelming the board’s on-paper decision-making authority.

Let OpenAI be the canary in the coal mine. In this new era of “purpose,” boards, leaders, and employees are going to have to figure out how to be more explicit about what a company actually stands for, or risk being torn apart by competing stakeholders’ interpretations of that purpose.

Ryan Baum is a partner and advisor with Jump Associates, a future-focused strategy and innovation firm. In this piece, Ryan explores the degradation of corporate hierarchy, with OpenAI as the proverbial canary in the coal mine.

