The Drivers Behind Sam Altman’s Ousting

Nov 27, 2023

The OpenAI Ousting of Sam Altman Revealed That Artificial General Intelligence is Much Closer Than Anyone Thinks

The last 10 days in the world of artificial intelligence have been absolute chaos.

At the center of the controversy was an ousting that made absolutely no sense at all. But what was happening behind the scenes was so much more interesting…

In fact, the latest developments are so significant, they will change all of our lives far sooner than we think. And not knowing what’s happening right now means being unprepared for the near future.

But before we dig in, some quick housekeeping…

If this is your first time tuning in at the new Brownridge Research, welcome. You’ve made it to the Outer Limits, my new e-letter for trendspotting and keeping track of important happenings in the fast-changing world of high-tech, the markets, and the perpetual chaos that now surrounds us.

We’ve only just launched and are continually welcoming feedback and questions. You may submit your own right here. And you can catch up on the latest issues here and here.

Now, as for the world of AI...

On November 17th, Sam Altman, CEO of AI giant OpenAI, was fired by the OpenAI board of directors. Three hours later, Greg Brockman, President of OpenAI, resigned as well.

As a reminder, OpenAI — a capped-profit company governed by a nonprofit board — is the entity that unleashed ChatGPT on the world almost a year ago to the day (released on November 30th, 2022).

ChatGPT, in its current format, is a very advanced generative artificial intelligence (AI) built on what’s called a large language model (LLM). The development of ChatGPT was a breakthrough — one that will mark a historical inflection point in time.

LLMs are trained on massive data sets of language. They are capable of analyzing and “understanding” the information they are trained on… and then drawing on that information to generate cohesive, detailed, and accurate responses to prompts or questions.

The applications for LLMs are nearly endless. They can be used to generate ideas, quickly answer questions about almost any topic, write essays, write songs, review legal agreements, write legal agreements, analyze medical records, write software code, generate a website, etc.

While OpenAI’s ChatGPT may appear to be a novelty to some, it's already a remarkable business. Since launching ChatGPT, OpenAI has generated more than $1 billion in revenue. Current 2023 forecasts for ChatGPT, which is now based on the more advanced GPT-4 large language model, are around $1.3 billion.

Imagine that. Going from basically zero revenue prior to November 30th, 2022, to $1.3 billion in revenue in 2023. Zero to $1.3 billion… and growing.

Altman clearly created immense economic value for OpenAI with even more economic upside to come, which is why the ousting seemed so nonsensical…

Better yet, Altman had managed to raise $3 billion from Microsoft early on, and then another $10 billion from Microsoft at a $29 billion valuation in January of this year. Early investors in OpenAI, on paper, had made a fortune already under Altman’s leadership.

And this month, Altman was in the process of striking a secondary sale of OpenAI shares with Thrive Capital at an $86 billion valuation.

Needless to say, these are extraordinary returns, and a strong indication of what the technology is worth.

Which brings us to the why…

Headed for Trouble

Something incredible had been happening in the background of the OpenAI story — something I predicted would happen this year.

OpenAI researchers had made another major breakthrough with their artificial intelligence — something the team calls Q* (pronounced “Q-star”).

Q* is a breakthrough because it can solve mathematical problems that have historically been outside the reach of large language models. While Q* is currently limited to grade school-level problems, the key point is that the AI demonstrated the ability to reason.

As is typical in AI research, once a key breakthrough happens with a small model, improvements come quickly. It’s just a matter of expanding the training set and throwing more computational power at the AI model. More computational horsepower for training quickly results in a more capable AI.

And with the ability to reason came the quick realization by industry insiders that the world just took a large step toward OpenAI’s ultimate goal — artificial general intelligence (AGI).

We can think of an AGI as software that matches or exceeds human intelligence and is capable of performing just about any task that humans can do. An AGI has the ability to think, reason, and solve problems in the same way that we do.

That’s why things got very interesting — and contentious — at OpenAI. Q* demonstrated a path for OpenAI to become the first company to build an AGI. And that means immense power, control, and of course money — lots of money.

With dynamics like that, there will always be trouble.

Jockeying for Control at OpenAI 

Altman knew the significance of Q* and what would follow.

In fact, the day before his firing, he made a public statement at an event in Oakland, CA that jumped out at me. He said, “the model capability will have taken such a leap forward that no one expected” when referring to what OpenAI would announce in 2024. It wasn’t even cryptic. Altman was bold enough to refer to the breakthrough out loud.

What most people didn’t see, however, was that in the background of this development, a small cohort of OpenAI employees had been voicing fear, uncertainty, and doubt about Altman’s leadership, claiming that the research was moving ahead unchecked and being done irresponsibly, without any safety controls in place.

The board’s explanation for the firing was based on a “breakdown in communication between Sam and the board,” which the board clearly blamed on Altman.

It's worth stepping back here to reemphasize the value of owning and controlling an AGI. Some have argued it’s a national security risk, and for nation states like the U.S. and China, having one is a national priority.

A company that wins the race to AGI will easily exceed a $1 trillion valuation and become immensely powerful. And while I’m speculating, I have to believe that there were political/governmental forces at play here, as well.

Altman’s tight control and leadership over the development and direction of OpenAI’s technology was clearly seen as a threat to the board, and perhaps other powerful forces...

A path to AGI controlled by one entity, or one strongly opinionated leader, would be seen as too much power — a power that a government wants to control, perhaps by a proxy. 

Also factoring into this circus is the elephant in the room: Microsoft.

Microsoft's Smoke & Mirrors Strategy at OpenAI

Being the dominant investor in OpenAI, and the only company with access to OpenAI’s software code, Microsoft is in a unique position as an insider. And I would argue that, in the near term, Microsoft has effective control over OpenAI.

This isn’t openly discussed, though, and OpenAI and others argue against this position. But that argument only holds on paper, because of how cleverly Microsoft structured its investment…

In exchange for its $13 billion, Microsoft gets exclusivity on the OpenAI code, and OpenAI has to pay Microsoft for all of its cloud computing requirements. Said another way, OpenAI runs all of its AI on Microsoft’s Azure cloud computing system.

Microsoft will receive 75% of all profits generated from OpenAI until its investment has been fully repaid. And then once that has happened, Microsoft will own 49% of OpenAI.
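To make the payback mechanics concrete, here is a minimal sketch of how the reported terms would play out. The $13 billion figure comes from the article; the annual-profit number is purely an illustrative assumption, since OpenAI’s actual profits are not disclosed.

```python
# Simplified sketch of the reported Microsoft/OpenAI profit-sharing terms.
# The investment figure is from the article; annual profit is a made-up
# illustrative assumption, not a disclosed number.

def years_to_repay(investment, annual_profit, profit_share=0.75):
    """Count the years until Microsoft's cumulative 75% profit share
    repays its investment (after which its stake reportedly drops to 49%)."""
    repaid, years = 0.0, 0
    while repaid < investment:
        repaid += annual_profit * profit_share
        years += 1
    return years

# Example: $13B investment, hypothetical $2B of annual profit.
print(years_to_repay(13e9, 2e9))  # -> 9 (years)
```

The point of the sketch is simply that the faster OpenAI becomes profitable, the sooner Microsoft is made whole — and the sooner its 49% stake kicks in.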

It might sound crazy to do a deal like this. Again, when Microsoft invested $10 billion in January, it was at a $29 billion valuation for a company that had almost zero revenue.

Microsoft’s key objective, however, was access to the code, and the ability to deploy that code across its enterprise and consumer software platforms.

And the deal was smartly structured in this way to avoid any regulatory issues from the U.S. government. By not taking an official board seat, and only ending up with 49% once its investment had been repaid, on paper, Microsoft can say that it has no control over OpenAI, thus avoiding unwanted scrutiny.

This is such a smart “smoke and mirrors” play by Microsoft, which itself is effectively a monopoly.

What was interesting was that Microsoft appeared to have been caught off guard by Altman’s termination.

An Incredible Circus

With so much at stake, the stakeholder response to Altman’s ousting came as no surprise.

Major investors in OpenAI like Sequoia Capital, Tiger Global Management, venture capitalist Vinod Khosla, former Google CEO Eric Schmidt, and others came out in strong public support of reinstating Altman.

Soon after, more than 700 of OpenAI’s employees signed an open letter stating that they would quit OpenAI if Altman and Brockman weren’t immediately reinstated and the existing board replaced.

Naturally, the investors were in panic mode: Tens of billions were at stake, and if the company collapsed, so would their investments.

Microsoft quickly announced that it was hiring Altman and Brockman to lead a new AI “research unit.” That meant that nearly any OpenAI employee who resigned from OpenAI would have a home at Microsoft under the two executives.

The key point here is that Microsoft would win no matter what. After all, it already had OpenAI’s software code…

And then, last Tuesday, November 21st, the news came out that Altman and Brockman had been reinstated and that a new board was being put in place. The only “concession” Altman made was agreeing to an investigation into what alleged actions by him and Brockman led the board to want to oust them in the first place. I doubt the investigation will find much at all.

One thing is clear: This was a complete failure of the OpenAI board. People, especially those who aren’t actually doing the work, tend to make stupid decisions based on imperfect information.

Very few board members are willing to get their hands dirty and understand what actually is, or isn’t, happening within a company. If the board was really that concerned, it should have commissioned a professional, third-party investigation before taking any action.

And it goes without saying that they clearly didn’t consider the economic implications of the decision they made — as surprising as that sounds. The simple fact that they didn’t consult their largest shareholder — Microsoft — ahead of the firing shows the full extent of the board’s dysfunction and incompetence. 

The end result was appropriate. The old board was removed. They literally risked $86 billion of present value, and even more future value — perhaps a $1 trillion-plus company. They were not acting in the best interests of the shareholders or the employees.

And despite this incredible circus, which the media is so heavily focused on, the real story is going untold…

Artificial general intelligence (AGI) is much closer than anyone thinks.

It will radically change our lives. Wherever it can be deployed quickly with economic value, it will be. And we’re not going to have to wait a decade — or even five years — for that to happen.

Back in 2019, I made a very public prediction that we would see AGI by 2028. If you’d like to watch my interview with Glenn Beck, you can see it right here.

At the time, this was an extremely unpopular prediction of mine. AI “experts” were forecasting 2035-2050 for AGI back then. Some thought I was ignorant or crazy.

Not anymore. I’m actually in the process of reassessing my own prediction…

Put simply, it's likely I was too conservative.
