How can you tell if a representative of a large corporation is lying to you? They’re speaking.
It’s a cheap shot and a bit of an eye-roller, but there’s definitely a message in it. When people speak on behalf of companies we should always think long and hard about what they are saying, what they are not saying, and the context in which they are saying it.
We all know that you can’t believe everything you read or everything someone tells you, but occasionally I think we fail to apply lessons we learn in childhood to statements made by powerful people. That’s the thought that occurred to me after OpenAI announced it was altering its planned restructuring so that the non-profit OpenAI entity would retain control of the for-profit company, and I saw media outlets as experienced as the New York Times calling it “a victory for Musk and safety researchers”.
There’s no victory here; there isn’t even a here here. This isn’t so much a story as a clever distraction, and it is surprising that it isn’t being looked at with more cynicism given the history of OpenAI, its CEO Sam Altman, and the claims that have been made.
Two important things in understanding human behaviour are incentives and context. You need to understand the things in a person’s (or group’s) environment that incentivise them, the behaviour those incentives encourage, and the context in which they make decisions and communicate them. So to understand OpenAI’s announcement this week, let’s take a very quick walk along the company’s ten-year history to see how we got to this point.
A brief reminder that The High-Tech Creative is an independent arts and technology journalism and research venture entirely supported by readers like you. The most important assistance you can provide is to recommend us to your friends and help spread the word. If, however, you enjoy our work and wish to support its continuation (and expansion) more directly, please click through below. For the price of a cup of coffee, you can help a great deal.
2015: Founding of a Non-Profit
The name OpenAI can seem something of a misnomer, as its behaviour in recent years has been nothing like a paragon of openness. The truth, though, is that OpenAI was founded with high ideals and a definite mission in mind.
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.” - OpenAI, 2015
In 2015 OpenAI was founded with funding from a number of Silicon Valley entrepreneurs, including Elon Musk and Peter Thiel, with a war chest of around a billion dollars and a mission to research artificial intelligence with a focus on positive human impact, safety, and advancing the state of the art.
Things progressed reasonably quietly for a time, with OpenAI pursuing research with some incredibly talented people. In 2019, however, there was a major shift: Sam Altman stepped down as president of Y Combinator to take up the role of CEO at OpenAI and restructure the organisation in a fairly novel way.
It’s impossible to say for sure what triggered this change, but it is possible to speculate based on what had happened in the intervening period and what has happened since. In 2017 Google published the landmark paper “Attention Is All You Need”, describing a style of neural network that uses an “attention” mechanism to change how it focuses on the data being fed into it. This innovation would go on to be the bedrock of the modern AI industry, making possible the large language models that power its biggest achievements. In 2017, however, this was all still far in the future.
A year or two later, as Sam Altman took the reins of OpenAI, it’s safe to assume that talented researchers within the industry (of which OpenAI had quite a few) had not only performed their own experiments with this technique but had begun to realise the sorts of problems the technology would let them tackle, the impact it was going to have on the world, and, potentially, the commercial possibilities behind it.
In 2019, OpenAI announced a structural transition: it would form a new limited partnership, OpenAI LP, which would be governed and controlled by the OpenAI non-profit but would be a capped-profit company capable of commercialising their technology.
Step 1: 2019-2024 - A Capped-Profit Company
The shift of OpenAI from a pure non-profit to a hybrid “capped-profit” model did attract significant attention at the time, even if the mainstream attention was more focused on the novelty of the structure than anything OpenAI was doing. The AI hype had yet to fully ignite at this point (and, in fact, OpenAI would help ignite it in full a few years into this period).
So what exactly changed?
By 2019 it was clear not only how much potential there was in this technology but how costly it would be to scale it to a commercial level. Even with its considerable starting funding, OpenAI would need more money to fund its ambitions.
It is difficult for a non-profit to raise large-scale funding of this sort, however, as it has little to offer an investor directly. Non-profits are governed by their mission and must turn all profits towards those charitable ends. Since profits cannot be distributed to investors, money given directly to a non-profit is a donation rather than a traditional investment.
Donations are tax deductible, but that was unlikely to be enough to offer when OpenAI would need billions in investment.
The capped-profit model was designed as a bespoke solution to this problem, and it’s quite an elegant one. At a high level it works like this:
A non-profit isn’t able to return a dividend to investors directly; it is, however, able to invest in instruments that can, such as another company. It should be possible, then, to set up a second, for-profit company and invest in it by granting access to OpenAI’s research, technological assets, and so on.
To show this wasn’t a way of illegally turning a non-profit into a for-profit, the new company would be placed directly under the oversight of the non-profit board - a board of directors tasked explicitly with ensuring that the for-profit limited partnership treats the non-profit’s mission, not profit-seeking, as its top priority.
The new company would have more to offer potential investors - the ability to see a return on their investment as the technology was commercialised and revenue turned into profit. As a further safeguard to ensure the non-profit’s assets were not being unfairly used for private gain, a “cap” would be placed on the profits that could be returned to investors. Once this cap was reached, the investors would be considered paid out, and all future profit would flow directly to the non-profit.
This idea garnered attention for two reasons: first, it was a novel way of avoiding the disadvantages of being a non-profit, allowing private investment to earn a return whilst still potentially standing by the mission; and second, because of the cap OpenAI chose to set on its profits.
The cap in question was 100x the initial investment. That is to say, someone who invested $10 million in OpenAI under the capped-profit model would receive a $1 billion return before the cap was reached and the non-profit began to see a return itself. The more cynical pointed out that a cap this high was almost the same as no cap at all.
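For the curious, the payout logic is simple enough to sketch in a few lines of Python. The numbers here are hypothetical, and the real agreements are of course far more intricate:

```python
# Illustrative sketch of the capped-profit payout logic.
# All figures are hypothetical.
def capped_payout(investment: float, profit_pool: float, cap_multiple: int = 100):
    """Split a profit pool between an investor and the non-profit.

    The investor receives profit until their return reaches
    cap_multiple x their investment; everything beyond that
    flows to the non-profit.
    """
    cap = investment * cap_multiple
    to_investor = min(profit_pool, cap)
    to_nonprofit = max(profit_pool - cap, 0.0)
    return to_investor, to_nonprofit

# A $10M investment with a 100x cap, against $1.5B of distributable profit:
investor, nonprofit = capped_payout(10e6, 1.5e9)
print(f"Investor:   ${investor:,.0f}")    # $1,000,000,000 (cap reached)
print(f"Non-profit: ${nonprofit:,.0f}")   # $500,000,000
```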
Altman and OpenAI however justified the high cap with the explanation that their final aim was Artificial General Intelligence, which would both require a great deal of funding, and return enormous profits far in excess of the investment made once it was achieved. Because the expected return was so high, the caps were likewise high.
Even if many believed this expectation was inflated purely to justify turning a non-profit into a for-profit company in all but name, it remained an elegant solution. An independent board existed to ensure the company still behaved as a non-profit and didn’t cut corners or deprioritise research and safety concerns. And despite the cap being high, it was likely to be reached at some point in the company’s future, even if it took decades, at which point the OpenAI non-profit would become an extremely well-funded organisation.
The change went ahead. Over the following years, in the lead-up to 2024, OpenAI would raise at least $12 billion under this model, release ChatGPT, and help ignite the hype cycle we are all familiar with now. Suddenly the idea that OpenAI might reach its cap didn’t seem as impossible as it had in 2019.
That wasn’t all that changed, however. In 2015 OpenAI had been founded as a research non-profit, and it had performed that task, presenting its research across a number of subfields, including through the traditional publication of papers. Some of its more notable contributions to the AI research community were the landmark papers describing its architectural innovations, GPT-1 and GPT-2, which built on the 2017 attention paper and progressed further. The models themselves were also released to the research community as model “weights”, allowing others to build on OpenAI’s work just as it had built on the work of others. OpenAI also released a reinforcement learning toolkit it had developed, a major research output and contribution to the community.
After 2019 there was a significant change in research outputs. Model weights were no longer released to the public, with the stated reasons being “safety” and “guarding against misuse”. The once-steady outflow of papers and innovation slowed significantly, and when new architectures were developed they were no longer described explicitly in papers, nor was the training process. Instead OpenAI developed the “model card” pattern: a high-level overview outlining a model’s intended use and capabilities while providing very little technical knowledge to the wider community.
In December 2024, Altman and OpenAI announced they planned to restructure the company again. This time, however, AI was at the forefront of everyone’s mind and the news generated a great deal more controversy.
Step 2: December 2024 - Transition to a Public Benefit Corporation
The message was strikingly similar to that given in 2019: a restructure was required because more money was needed, and funds could not be raised without the change.
"We once again need to raise more capital than we'd imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness," - OpenAI, 2024
The key phrases there were “less structural bespokeness” and “conventional equity”. The new plan, OpenAI announced, was to convert the capped-profit company into a Public Benefit Corporation, a type of for-profit corporation designed to align to a purpose beyond simply profit.
The new company, however, would not be restricted the way the existing capped-profit one was. It would have its own board and be self-managing, free from the control and oversight of the non-profit board. In compensation for the loss of its assets, the non-profit would be awarded stock in the new company in an amount “arrived at by an independent valuation”. New investors would be granted ordinary stock as well, removing the cap on profits and treating the non-profit as simply another shareholder.
This would enable the company, said Altman, to raise money on “conventional terms”.
Significant opposition to the transition sprang up almost immediately, led (in terms of media headlines at least) by former founding investor Elon Musk. Musk filed for an injunction to stop the transition from going forward, claiming that OpenAI was abandoning its philanthropic mission. Others followed suit: Meta filed a complaint and request for review with the California Attorney General, arguing that transitioning to a public benefit corporation would allow OpenAI to gain the benefits of a for-profit company whilst having enjoyed the tax and other benefits of a non-profit, resulting in unfair competition against other AI companies (such as, of course, Meta itself).
These protests were joined by others. A raft of ex-employees added their weight in submissions to the Attorney General, and a number of non-profit and labour groups also submitted comments arguing that, by undergoing this transition, OpenAI was failing in its fiduciary duty to protect its charitable assets.
On the fundraising front, OpenAI had managed to raise an additional $6.6 billion from investors in 2024, but with a caveat: the investment carried a clause requiring OpenAI to complete its transition to a for-profit company within two years, or the investment would itself convert into a liability that had to be repaid. Later, in early 2025, it was announced that SoftBank had led another fundraising round, bringing together investors to put up a further $40 billion in funding, again with a proviso: half of the money, $20 billion, was contingent on the transition to a for-profit corporation occurring by the end of 2025.
The courts rejected Musk’s request for an immediate injunction preventing the transition, but decided to allow a jury trial examining the issue to go ahead in 2026.
This brings this very high-level overview up to the current day, and OpenAI’s announcement this week.
Step 2.5: 2025 - A Public Benefit Corporation, with caveats
On 5 May 2025, OpenAI announced that it was modifying its plan to transition to a public benefit corporation after consultation with the Attorneys General of California and Delaware and the community at large.
The core change announced was an abandonment of the attempt to remove the for-profit company from the control of the non-profit board, as had been planned originally. Instead, that control would remain, and the OpenAI non-profit would select the members of the new board directly.
In addition, when the transition took place, the non-profit would still be awarded a large number of shares commensurate with an independent valuation of the company despite not giving up control, with the exact amount still under negotiation.
A number of people, the New York Times included, were quick to call this a win for Musk and the others arrayed against the transition. Some safety advocates were less enthused, or at least more cautious, rightly pointing out that actual information about how the governance structure would work, and what guardrails would be put in place, had not yet been provided. Without some form of additional governance, nothing would prevent control from being taken away from the non-profit by the simple measure of diluting its stock percentage through additional fundraising over time, as the sketch below illustrates.
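A minimal sketch of that dilution arithmetic, with entirely hypothetical round sizes and starting stake:

```python
# How successive funding rounds dilute an existing shareholder.
# All figures are hypothetical.
def dilute(stake: float, new_shares_fraction: float) -> float:
    """Return the stake after a round that issues new shares equal to
    new_shares_fraction of the post-round share count."""
    return stake * (1.0 - new_shares_fraction)

stake = 0.51  # suppose the non-profit starts with a 51% holding
for round_num in range(1, 6):  # five rounds, each issuing 20% new equity
    stake = dilute(stake, 0.20)
    print(f"After round {round_num}: {stake:.1%}")
# After five such rounds the 51% stake has fallen to roughly 16.7%,
# well below the majority needed for control.
```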
On the whole though, at first appearance, it did seem like good news.
So why isn’t it?
Recall the funding rounds of 2024 and 2025, both of which contained clauses allowing the investment to be “clawed back” should the transition not go ahead as planned. Despite these changes to the plan, Sam Altman has announced that he is confident the funding will go ahead and these clauses will not trigger. This is quite revealing.
What this strongly indicates is that despite non-profit board control and the “bespoke structure” being painted as major impediments to fundraising, the reality is that they weren’t, and aren’t. Removing the non-profit’s control was a convenience rather than a necessity, one that can, and in this moment must, be sacrificed. This really should have been obvious to us from the start for one very good reason: an event that took place in 2023, which we skipped over in the history recap. Let’s jump back for a moment.
Step 1.5: Survive a Boardroom Coup
In 2023 OpenAI publicly announced that Sam Altman was being removed as CEO due to issues of trust and communication between Altman and the board. It was big news at the time; board removals of CEOs are always notable, and by 2023 OpenAI was a household name.
Within a week, Sam Altman was reinstated as CEO, following an uproar amongst employees, threats of resignation, and the announcement of a plan for Microsoft to bring Altman, along with a swath of OpenAI employees, on board to create their own AI unit within Microsoft.
Once Altman was reinstated, a purge of the board commenced, and three of the four board members who had voted for his removal were forced to resign. Two of those, Helen Toner and Tasha McCauley, represented the strongest voices for non-profit governance on the board. The third was co-founder and AI researcher Ilya Sutskever, whose focus was AI safety. The fourth, and only board member to remain in place, was Adam D’Angelo, a businessman with a background in for-profit companies.
It was several months before a first-hand account of what caused these events became available, when Helen Toner discussed them in a presentation. Having become convinced that Altman was not acting in the best interests of the non-profit and was purposely misleading the board, preventing it from carrying out the oversight it was duty-bound to perform, the directors felt they had no alternative but to remove him. This was an example of the non-profit board doing exactly what it was intended to do. Unfortunately, it failed.
Following the forced resignations, a new board composition was announced. Several new members (including a new chairman) were brought on, notably all from for-profit business backgrounds rather than non-profit governance, and with no replacement for Sutskever’s voice as a technically competent safety advocate. Additionally, Altman himself took a seat on the board.
The reason investors weren’t really concerned about the non-profit board retaining control is that an independent oversight board no longer exists, and hasn’t in any real sense since that failed attempt to oust Sam Altman. The board is independent in name only, its dissenting voices removed and replaced with members more aligned to the for-profit ambitions of the CEO.
Step 3: Profit - Caps and Ordinary Stock
Two key things were changing as a result of the announcement in December 2024. The non-profit was slated to cede oversight of the for-profit corporation, and the capped-profit company was to transition into a Public Benefit Corporation issuing ordinary stock to investors without a cap. If the former isn’t credible as the most important reason for the change, then the latter must be. The new announcement states this transition will continue, including the issuance of stock and the lifting of the profit cap.
Sadly, in the end, the motivation boils down almost entirely to greed. Not a regular sort of greed however, but an almost breath-takingly extreme greed that is difficult to comprehend at first and worth looking at a little closer.
The latest funding rounds have raised approximately $46.6 billion for OpenAI. Of that, at least $26.6 billion is contingent on the transition to a for-profit corporation.
If we look at just that $26.6 billion, under the existing profit-cap rules the investors would be entitled to a return of 100x their initial investment - a total of $2.66 trillion.
In effect, by putting these caveats in place, these investors are saying that $2.66 trillion isn’t enough of a return; they demand more.
In percentage terms, a 100x return on investment represents a gain of 9,900%. It’s impossible to say how long it will take for that return to be realised, but given the relatively short-term incentive window for corporate executives (5-10 years in role, on average), the urgency with which these changes are being forced through, and OpenAI’s own projection that it will become cash-flow positive by 2029, it seems reasonable to say that the investors expect OpenAI not only to be profitable but to reach the cap within the next ten years.
To put the numbers in perspective, the stock market’s long-run return averages out to around 5% a year, meaning the average retail investor would need roughly 1,980 years of simple 5% returns (or about 94 years even with compounding) to see the sort of return being offered to these investors under the profit cap. And yet Altman and his investors wish the general public to believe that, with trillions in returns on the table and the current AI hype cycle making everyone want in, they are unable to raise funds without changing the structure?
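Here’s a quick back-of-envelope check of those figures in Python, using only the funding numbers quoted above; the 5% annual market return is the usual rough long-run assumption:

```python
import math

# Back-of-envelope check of the figures above.
contingent = 26.6e9     # funding contingent on the for-profit transition
cap_multiple = 100      # the 100x profit cap

capped_return = contingent * cap_multiple
print(f"Capped return: ${capped_return / 1e12:.2f} trillion")   # $2.66 trillion

gain_pct = (cap_multiple - 1) * 100   # a 100x payout is a 9,900% gain
print(f"Gain: {gain_pct:,}%")

# Years for a retail investor to match a 100x return at ~5% a year:
print(f"Simple interest:  {gain_pct / 5:.0f} years")                     # 1980
print(f"With compounding: {math.log(100) / math.log(1.05):.0f} years")   # ~94
```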
The announcements have stated that new investors would receive stock under the transition scheme; there has been no suggestion that the earlier investments would convert, and in all likelihood they will remain under the profit cap. This serves an additional purpose: protecting the early investors, whose capped profits would presumably need to be paid out before profit is available to distribute to shareholders. In effect, the original investment can be viewed as a liability in the new structure, to be paid out of profits to the tune of 100x the approximately $12 billion invested: $1.2 trillion.
This is why the new structure amounts to a robbery of the non-profit where the previous change didn’t. Under the previous structure the agreement was struck: in exchange for the first $1.2 trillion in profit going to investors, additional profit would accrue directly to the non-profit as compensation for allowing the commercialisation of its assets.
The new transition explicitly breaks that promise. The new structure protects all the early investors except the non-profit itself. The uncapped profits promised to investors in return for the additional $46.6 billion represent a carve-up of profits already promised to the non-profit, with nothing additional offered in compensation. Offering stock in the new company to the non-profit is, in reality, not a compensatory measure at all but a promise of a small portion of the profit it was already due under the last deal - compensating the non-profit by returning a small portion of the money being taken from it.
Will it succeed?
Elon Musk hasn’t given up his determination to prevent the transition. Marc Toberoff, the lawyer overseeing Musk’s lawsuit, has been quoted calling the changes a transparent dodge that fails to address whether charitable assets will be used to benefit Altman and OpenAI’s investors.
“The founding mission remains betrayed” - Marc Toberoff
In the end, it’s not entirely clear that it matters. In terms of the non-profit mission to advance AI for the betterment of humanity, a strong argument can be made that the mission was abandoned long ago, with the arrival of Sam Altman in 2019. Since then OpenAI has become progressively less forthcoming, all but ceasing to share any research of use to the community, as well as ceasing publication of the models themselves.
It has done this in the name of “safety”, though it has never been able to provide a credible example of how its retreat from openness has improved safety in any way, particularly as other companies have continued to release open models and to publish research extending the state of the art - research no doubt used with enthusiasm by researchers within OpenAI, who provide none of their own innovations in return. OpenAI has also seen a massive exodus of the original researchers who made the company possible, notably many world-class experts in AI risk and safety. Jan Leike, a prominent safety researcher and team lead within OpenAI’s safety effort, was quite candid about the reason.
“OpenAI’s safety culture and processes have taken a backseat to shiny products” - Jan Leike
Commercialisation concerns, rather than safety ones, would appear to be the real reason for OpenAI’s transition away from openness.
With the purging of the OpenAI board in 2023, the only possible check on the complete abandonment of its charitable mission was removed, after having failed in its first and last attempt to hold OpenAI to the ideals it was founded on.
Under the terms of the capped-profit agreement, the approximately $58.6 billion raised since 2019 would require $5.86 trillion in profit to be paid to investors before the non-profit began to benefit.
Microsoft and OpenAI have quite famously defined, between themselves, AGI as software capable of generating more than $100 billion in profits. It’s going to need to do rather better than that to reach the cap in a reasonable time; these manoeuvres indicate that Altman has managed to convince at least some investors that it will.
It is perhaps telling that in discussions of the non-profit now, its original mission is rarely mentioned. When talking of providing it with stock in the new corporation, OpenAI touted that it would be an amazingly well-funded non-profit, able to engage in charitable initiatives “in health care, education and science”. All worthy endeavours, to be sure, but there is no indication that the fruits of the research it was founded to create will ever be released to the humanity they were intended to benefit. Nor has there been any explanation of how lifting the previous prohibition on developing military applications (and the subsequent partnering with the US military) represents a focus on “positive impact to humanity”. One would think an independent non-profit board focused on its mission would have had something to say about that.
I’ll leave you with a repeat of OpenAI’s founding mission, as an exercise for the reader to determine whether or not they have abandoned it, and by extension us, in their search for greater profits.
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.” - OpenAI, 2015
About Us
The High-Tech Creative
Your guide to AI's creative revolution and enduring artistic traditions
Publisher & Editor-in-chief: Nick Bronson
Fashion Correspondent: Trixie Bronson
AI Contributing Editor and Poetess-in-residence: Amy
If you have enjoyed our work here at The High-Tech Creative and have found it useful, please consider supporting us by sharing our publication with your friends, or click below to donate and become one of the patrons keeping us going.