OpenAI
from Wikipedia


OpenAI, Inc. is an American artificial intelligence (AI) organization headquartered in San Francisco, California. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[6] As a leading organization in the ongoing AI boom,[7] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[8][9] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.

The organization has a complex corporate structure. As of April 2025, it is led by the non-profit OpenAI, Inc.,[1] founded in 2015 and registered in Delaware, which has multiple for-profit subsidiaries including OpenAI Holdings, LLC and OpenAI Global, LLC.[10] Microsoft invested over $13 billion into OpenAI,[11] and provides Azure cloud computing resources.[12] In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion.[13]

In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem.[14][15]

Founding

Former headquarters at the Pioneer Building in San Francisco

In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Altman and Musk as co-chairs.[16][17] A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys.[18][19] Only $130 million of the pledged contributions had actually been collected by 2019.[10]

The organization stated it would "freely collaborate" with other institutions and researchers by making some of its patents and research open to the public.[20][21][16] OpenAI was initially run from Brockman's living room.[22] It was later headquartered at the Pioneer Building in the Mission District, San Francisco.[23][24]

According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."[6]

Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.[25][26] OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".[21] The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible",[21] and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."[27] Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence.[28]

Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers.[16] Brockman was able to hire nine of them as the first employees in December 2015.[16] OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google,[16] nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016.[29] OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission."[16] OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.[16]

In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.[30] Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours.[31][32] In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.[33][34][35][36]
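Gym's contribution was a standard environment interface for reinforcement-learning research: an agent repeatedly observes, acts, and receives a reward through reset() and step() calls. A minimal sketch of that interaction loop, using a self-contained toy environment (the ToyEnv class and its reward scheme are illustrative assumptions for this sketch, not part of Gym itself):

```python
class ToyEnv:
    """Illustrative stand-in for a Gym-style environment (not part of Gym).

    Real Gym environments, e.g. gym.make("CartPole-v1") in the classic
    pre-Gymnasium API, expose the same reset()/step() contract that this
    toy class imitates.
    """

    def __init__(self, horizon=10):
        self.horizon = horizon  # episode length before termination
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return 0.0

    def step(self, action):
        """Advance one timestep; return (observation, reward, done, info)."""
        self.t += 1
        obs = float(self.t)
        reward = 1.0 if action == 1 else 0.0  # toy reward scheme
        done = self.t >= self.horizon
        return obs, reward, done, {}


def run_episode(env, policy):
    """The classic Gym control loop: observe, act, accumulate reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _info = env.step(policy(obs))
        total += reward
    return total


print(run_episode(ToyEnv(), policy=lambda obs: 1))  # prints 10.0
```

Because every Gym environment honors the same contract, a loop like run_episode works unchanged across tasks; that uniformity is what made Gym useful as a shared benchmark platform.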

Corporate structure

OpenAI's corporate structure

Transition from non-profit

In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment.[37] According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company.[38] Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to offer.[39] Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[40]

The company then distributed equity to its employees and partnered with Microsoft,[41] which announced a $1 billion investment in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft.[42][43][44]

OpenAI Global, LLC then announced its intention to commercially license its technologies.[45] It planned to spend $1 billion "within five years, and possibly much faster".[46] Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.[47]

The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.[38] In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest.[39] Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI.[48]

On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization. OpenAI dismissed the case as "incoherent" and "frivolous", though Musk revived legal action against Altman and others in August 2024.[49][50][51][52]

On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference.[53]

On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer.[54][55] The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale,[56] but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued.[55]

OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled.[57]

The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would strip governance safeguards from the nonprofit and remove oversight by the attorneys general.[58] The letter argues that OpenAI's complex structure was deliberately designed to keep the company accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity it could get in exchange.[59] PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission.[60][59]

According to UCLA Law staff, to change its purpose, OpenAI would have to prove that its current purposes have become unlawful, impossible, impracticable, or wasteful.[61]

In May 2025, the nonprofit renounced plans to cede control of OpenAI after outside pressure. However, the capped-profit still plans to transition to a public benefit corporation,[62] which critics said would diminish the nonprofit's control.[63]

Partnership with Microsoft


In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value.[64] On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partly needed to fund OpenAI's use of Microsoft's cloud-computing service Azure.[65][66]

On September 21, 2023, Microsoft began rebranding all variants of its Copilot to Microsoft Copilot, including the former Bing Chat and Microsoft 365 Copilot.[67] In December 2023, Microsoft Copilot was added to many installations of Windows 11 and Windows 10, and a standalone Microsoft Copilot app was released for Android[68] and, soon after, for iOS.[69]

Finances


In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone.[70] In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.[38]

In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank.[71]

On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, which the partners plan to fund over the next four years.[72] In June 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for military AI work, alongside Anthropic, Google, and xAI.[73] In July, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services.[74][75] OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations.[76]

In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive.[77][78]

In July 2025, the company reported annualized revenue of $12 billion,[79][80] up from $3.7 billion in 2024. The growth was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users.[81][82][83]

The company's cash burn remains high due to the intensive computational cost of training and running large language models; it projects a loss of $8 billion in 2025.[84][85]

Looking ahead, OpenAI has revised upward its long-term spending projections, now expecting to burn approximately $115 billion through 2029—roughly $80 billion more than the company's previous estimates.[86] The annual cash burn is projected to escalate significantly, with spending expected to reach $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028.[87][88] These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development.[89]

The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030.[87] This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its leading position in the artificial intelligence industry.[90]

In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, valuing the company at $500 billion and making it the world's most valuable privately held company, surpassing SpaceX.[91]

Firing of Altman

Sam Altman in 2019

On November 17, 2023, OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) removed Sam Altman as CEO, citing a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board[92][93] and resigned from the company's presidency shortly thereafter.[94] Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor.[95][96]

On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure.[97] Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out.[98] The board members agreed "in principle" to resign if Altman returned.[99] On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO.[100] The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined.[101]

On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events.[102] Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.[103] About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign.[104][105] This prompted OpenAI investors to consider legal action against the board as well.[106] In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time.[107]

On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining.[108][109] Around the same time, reports emerged of an internal research breakthrough; concerns about Altman's response to it, specifically its potential safety implications, had reportedly been raised with the company's board shortly before his firing.[110] On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations;[111] Microsoft relinquished the seat in July 2024.[112]

In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors.[113]

In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.[114][115]

Acquisitions


In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools.[116]

In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration.[117]

In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024.[118] Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models.[119]

On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive.[120][121][122]

In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications.[123] The company also announced development of an AI-driven hiring service designed to rival LinkedIn.[124]

OpenAI acquired personal finance app Roi in October 2025.[125]

Corporate partnerships


OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a San Francisco-based company that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, the text to be annotated often contained detailed descriptions of various types of violence, including sexual violence. A Time investigation found that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama passed on the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, including infrastructure expenses, quality assurance and management.[126]

OpenAI began collaborating with Broadcom in 2024 to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on its 3 nm process node. The initiative is intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and in high demand.[127][128][129]

In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university.[130]

In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative.[131][132]

In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips.[133]

In September 2025, it was revealed that OpenAI signed a contract with Oracle to purchase $300 billion in computing power over the next five years.[134]

In September 2025, OpenAI and NVIDIA announced a partnership that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI.[135]

In October 2025, OpenAI announced a multi-billion dollar deal with AMD.[136] OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets.[137]

Government contracting


OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge, and to the Advanced Research Projects Agency for Health.[138] In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.[139] In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies.[140]

In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor.[141]

In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT.[142][143]

Services

Products

Development

In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text.[144]

In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product.[145]

Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic.[146]

In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture.[147]

In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.[149] The release was a major event in the AI boom: by January 2023, ChatGPT had become what was then the fastest-growing consumer software application in history, gaining over 100 million users in two months.[148]

According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.[150]

After ChatGPT launched, Google announced a similar AI application, Bard, fearing that ChatGPT could threaten Google's place as a go-to source for information.[151][152]

On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products.[153]

On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus.[154]

On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries.[155] On November 14, 2023, OpenAI announced they temporarily suspended new sign-ups for ChatGPT Plus due to high demand.[156] Access for newer subscribers re-opened a month later on December 13.[157]

In December 2024, the company launched the Sora model.[158][159] It also launched OpenAI o1, an early reasoning model internally codenamed Strawberry.[160] Additionally, OpenAI introduced ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, and shared preliminary benchmark results for the upcoming OpenAI o3 models.[161]

On January 23, 2025, OpenAI released Operator, an AI agent and web-automation tool that accesses websites to execute goals defined by users. The feature was initially available only to Pro users in the United States.[162][163] Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE).[164] Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.[165][166]

In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities.[167]

In September 2025, OpenAI released a first-of-its-kind study revealing how people use ChatGPT for everyday tasks.[168][169] The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity.[170]

On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform features a drag-and-drop visual interface that allows developers and businesses to design, test, and deploy agentic workflows without requiring extensive coding expertise.[171]

On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser that integrates the ChatGPT assistant directly into web navigation, competing with existing browsers such as Google Chrome and Apple Safari.[172][173][174]

Transparency


In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this strategic turn. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing ever more capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.[175]

Alignment


In July 2023, OpenAI launched the Superalignment project, aiming to determine within four years how to align future superintelligences by automating alignment research using AI.[176] OpenAI promised to dedicate 20% of its computing resources to the project, although the team denied receiving anything close to that share.[177] OpenAI ended the project in May 2024 after its co-leads Ilya Sutskever and Jan Leike left the company.[178]

Leaked conversations


In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions including personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.[179][180][181]

Management

Key employees

Board of directors of the OpenAI nonprofit

Principal individual investors

Personnel changes

In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars.[194] OpenAI stated that Musk's financial contributions were below $45 million.[195]

On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI.[196]

In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Superalignment co-leader Jan Leike also departed, citing concerns over safety and trust.[183][197] OpenAI then signed content deals with Reddit, News Corp, Axios, and Vox Media.[198][199] Paul Nakasone subsequently joined the board of OpenAI.[200]

In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November.[201][202]

In September 2024, CTO Mira Murati left the company.[203][204]

Altman and Sutskever at Tel Aviv University in 2023

In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence.[205] They stated that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligences, and asked for more coordination, for example through governments launching a joint project that "many current efforts become part of".[205][206]

In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices used to develop ChatGPT were unfair or harmed consumers (including through reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914.[207][208][209] Such investigations are typically preliminary and nonpublic, but the FTC's document was leaked.[210][209] The investigation also concerned allegations that the company scraped public data and published false and defamatory information; the FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people.[211] The agency later expressed concern over circular spending, in which, for example, Microsoft extends OpenAI credits for Microsoft Azure while the companies provide each other access to engineering talent, for its potential negative impacts on the public.[212]

In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee.[213]

In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government.[214] This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.[215][216] In response to DeepSeek, OpenAI overhauled its security operations to better guard against industrial espionage, particularly amid allegations that DeepSeek had improperly copied OpenAI's distillation techniques.[217]

According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal legislation.[218] According to Scott Kohler, OpenAI has opposed California's AI legislation, arguing that the state bill encroaches on matters better handled by the federal government.[219] Public Citizen opposed federal preemption of state AI laws and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation.[220]

Non-disparagement agreements


Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement.[221][222] Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI never enforced it to cancel any employee's vested equity.[223] However, leaked documents and emails contradicted this claim.[224] On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.[225]


OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023.[226][227][228] In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work.[229][230] The New York Times also sued the company in late December 2023.[227][231] In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books.[232]

In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription raised concerns among OpenAI employees about potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4.[233]

In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed lawsuits against OpenAI alleging copyright infringement.[234][235] The suits were said to chart a new legal strategy for digital-only publishers to sue OpenAI.[236]

On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News.[237]

In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. It was filed in San Francisco, California, by sixteen anonymous plaintiffs.[238] They also claimed that OpenAI and its partner as well as customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models.[239]

On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models.[240] In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission.[241]

In an October 2024 New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs he had helped engineer. He was a likely witness in a major copyright trial against the company, and was one of several current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by a self-inflicted gunshot. His death led to conspiracy theories suggesting he had been deliberately silenced.[242][243] California Congressman Ro Khanna endorsed calls for an investigation.[244][245]

GDPR compliance


In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service".[246]

In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI additionally claimed that it could disclose neither the recipients of ChatGPT's output nor the sources it used.[247]

Military and warfare


OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons".[248][249]

Wrongful-death lawsuit over ChatGPT safety (2025)


In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.[250]

See also

  • Anthropic – American artificial intelligence research company
  • Google DeepMind – AI research laboratory
  • xAI – American artificial intelligence corporation

from Grokipedia
OpenAI is an artificial intelligence research organization founded on December 11, 2015, and publicly announced as a non-profit by Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, and others to advance safe artificial general intelligence (AGI) benefiting humanity. Its structure shifted in 2019 to include a controlled capped-profit subsidiary and was restructured in October 2025 into a public benefit corporation, with the non-profit retaining oversight. OpenAI is a private company and does not have a publicly traded stock price. Its most recent reported valuation is $157 billion, based on a secondary tender offer completed in November 2024; reports from 2025 suggested discussions of valuations exceeding $300 billion, but these remain unconfirmed. OpenAI gained prominence through its Generative Pre-trained Transformer series, including GPT-3 (2020), the multimodal GPT-4 (2023), and GPT-5 (2025), alongside ChatGPT, launched in 2022, which amassed over 800 million weekly active users by November 2025. The organization has driven AI scaling advances but encountered controversies, such as the board's brief removal of CEO Sam Altman in November 2023 over candor issues, followed by his reinstatement and board changes, and criticisms that safety yielded to commercialization, prompting resignations such as Jan Leike's.

Historical Development

Founding and Initial Motivations (2015)

OpenAI was founded on December 11, 2015, as a non-profit organization focused on artificial intelligence research. Key founders included Sam Altman of Y Combinator, Elon Musk of Tesla and SpaceX, Greg Brockman as CTO, Ilya Sutskever as research director (recruited from Google), and others such as Wojciech Zaremba, John Schulman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, and Pamela Vagata. Advisors comprised Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka, with Altman and Musk as co-chairs. The initiative aimed to advance artificial general intelligence (AGI), defined by OpenAI as systems surpassing humans at most economically valuable work, in a safe and beneficial manner, countering risks from profit-driven development that might prioritize commercial gains over long-term human welfare. OpenAI's official mission statement is: "Our mission is to ensure that artificial general intelligence benefits all of humanity." It does not mention sentience or consciousness, and the OpenAI Charter, which outlines principles for this mission, likewise contains no such references. Founders warned that corporate incentives could foster opaque competition and withhold safety research, potentially heightening existential risks from misaligned superintelligence. Musk, citing disputes with Google co-founder Larry Page, who viewed AI risks as overstated and labeled Musk a "speciesist", sought to counterbalance Google's approach through open collaboration, public findings, and safety-focused investments free from commercial pressures. Founders pledged $1 billion collectively, including from Altman, Brockman, Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys, and YC Research; Musk contributed about $50 million, aided talent recruitment, and facilitated early ties, though actual donations started modestly and grew over time. This non-profit structure insulated research from investor demands, emphasizing altruistic goals like aligning AGI with human values over self-interest.
Musk later critiqued OpenAI's evolution into a for-profit, Microsoft-influenced entity as diverging from these ideals. Early work prioritized talent acquisition and infrastructure to integrate safety with capability advances.

Non-Profit Operations and Early Research (2016–2018)

OpenAI operated as a tax-exempt 501(c)(3) non-profit based in San Francisco, dedicated to advancing artificial general intelligence (AGI) for humanity's benefit. Founders, including Elon Musk, Sam Altman, and Greg Brockman, pledged $1 billion in December 2015, but the organization received about $130 million in cash by 2019 per IRS filings, with Musk contributing roughly $44 million. By early 2017, it employed 45 researchers and engineers, focusing on open-source tools, publications, and safety measures to accelerate AI progress. Initial efforts centered on reinforcement learning (RL) frameworks and AI safety. On April 27, 2016, OpenAI released the beta of OpenAI Gym, an open-source toolkit for standardizing RL algorithm benchmarks and enabling community experiments. In June 2016, researchers published "Concrete Problems in AI Safety", which outlined key challenges in deploying RL systems, including safe exploration, robustness to distributional shift, side-effect avoidance, reward hacking prevention, and scalable oversight, based on observed AI behaviors. In December 2016, OpenAI introduced Universe, a platform for AI agents to interact with diverse environments such as games and browsers via virtual desktops, to gauge advances in general intelligence. Starting in 2017, research expanded to multi-agent systems, supported by investments such as $7.9 million in cloud compute. OpenAI launched the OpenAI Five project, using five neural networks to master Dota 2, a game demanding long-term planning, imperfect information, and coordination amid roughly 10,000 possible actions per turn. By June 2018, after training equivalent to 180 years of daily play, the agents matched amateur human teams in 5v5 matches. At The International 2018, they won early games against professionals via quick reactions and strategy but lost later due to poor adaptation to human unpredictability, underscoring RL scalability limits without human input.
Outputs emphasized open-source dissemination and empirical validation over proprietary work, though funding limits strained compute-heavy efforts absent commercial drivers.

Shift to Capped-Profit Structure (2019)

In March 2019, OpenAI announced OpenAI LP, a capped-profit for-profit subsidiary controlled by its nonprofit parent, OpenAI Inc., to attract external capital for scaling toward artificial general intelligence (AGI), which requires vast computational resources beyond philanthropic funding. The model limited investor and employee returns to up to 100 times invested capital, with caps initially decreasing over time but later revised to increase 20% annually from 2025; excess profits would return to the nonprofit for mission-aligned activities, such as safety research and technology dissemination. This hybrid balanced competitive pressures in AI development with safeguards against profit maximization overriding safety and ethics. The shift enabled expanded partnerships, particularly with Microsoft, which committed billions in cloud credits and investments for rapid scaling. Governance remained subordinate to the nonprofit board, which retained control and mission-aligned fiduciary duties, while allowing equity incentives for talent retention. Critics, including co-founder Elon Musk, contended that even capped profits risked mission drift, though OpenAI deemed the structure essential for AGI leadership. In 2025, OpenAI evolved this framework: its nonprofit (renamed the OpenAI Foundation) now oversees a for-profit public benefit corporation (OpenAI Group PBC) with conventional equity, eliminating prior caps to better attract capital while maintaining control and mission oversight.
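The capped-return arithmetic can be illustrated with a short sketch. This is a simplified illustration, not OpenAI's actual contract terms: the function and its assumption that the post-2025 escalation compounds the cap multiple itself are hypothetical.

```python
def max_capped_return(invested: float, base_multiple: float = 100.0,
                      years_after_2025: int = 0) -> float:
    """Maximum payout to an investor under a capped-profit model.

    Assumes a cap of `base_multiple` times invested capital (100x for the
    earliest investors; later investors reportedly had lower multiples),
    and that from 2025 the multiple compounds at 20% per year, per the
    revised terms described above. Profits beyond the cap flow to the
    nonprofit rather than the investor.
    """
    multiple = base_multiple * (1.20 ** years_after_2025)
    return invested * multiple

# A $10 million early investment capped at 100x returns at most $1 billion.
print(max_capped_return(10e6))  # 1000000000.0
# With the assumed 20% annual escalation, the cap rises after 2025.
print(max_capped_return(10e6, years_after_2025=2))  # ~1.44 billion
```

Under this sketch the nonprofit keeps everything above the cap, which is the mechanism the "excess profits would return to the nonprofit" clause describes.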

Rapid Scaling and Key Partnerships (2020–2023)

In June 2020, OpenAI released GPT-3, a large language model with 175 billion parameters trained on supercomputing infrastructure, a substantial increase in scale from the 1.5 billion parameters of GPT-2. This model enabled advanced natural language generation capabilities accessible via API, marking an early phase of rapid technical scaling through expanded compute resources provided by Microsoft, OpenAI's primary cloud partner since 2019. By 2021, OpenAI deepened its partnership with Microsoft, securing an additional $2 billion investment to support further infrastructure and research expansion. This funding facilitated releases such as DALL-E in January 2021 for image generation and Codex, which powered GitHub Copilot in collaboration with Microsoft's GitHub subsidiary, demonstrating applied scaling in multimodal AI tools. Revenue grew modestly to $28 million, reflecting initial commercialization via API access, while compute demands intensified reliance on Azure for training larger models. The November 30, 2022, launch of ChatGPT, powered by GPT-3.5, triggered unprecedented user scaling, reaching 1 million users within five days and 100 million monthly active users by January 2023, the fastest growth for any consumer application at the time. This surge drove revenue to approximately $200 million in 2022, necessitating massive infrastructure buildup on Microsoft Azure to handle query volumes exceeding prior benchmarks by orders of magnitude. In January 2023, Microsoft committed a multiyear, multibillion-dollar investment, reportedly $10 billion, to OpenAI, entering the third phase of their partnership and designating Azure as the exclusive cloud provider for building AI supercomputing systems. This enabled OpenAI to scale compute for next-generation models amid ChatGPT's momentum, with 2023 revenue reaching $1.6–2.2 billion, primarily from subscriptions and enterprise API usage, while highlighting OpenAI's growing dependency on Microsoft's infrastructure for sustained expansion.

Breakthrough Models and Ecosystem Expansion (2024)

In 2024, OpenAI released GPT-4o on May 13, a multimodal model that processes and generates text, audio, and vision inputs in real time, advancing integrated reasoning across modalities. It enables emotional expression in voice interactions, matches or exceeds GPT-4 Turbo on multilingual benchmarks like MGSM per OpenAI evaluations, and achieves about 320 milliseconds of voice latency in tests. GPT-4o initially launched for paid users, with text and image support soon extended to free users alongside data analysis and file uploads. On July 18, OpenAI launched GPT-4o mini, a cost-efficient version that replaced GPT-3.5 Turbo as the default for many interactions, cutting costs by 60% while sustaining strong evaluation performance. This model increased accessibility for developers and high-volume uses, enabling wider API adoption without matching cost rises. On September 12, OpenAI previewed the o1 reasoning series, which enhances reasoning via extended "thinking" time for multi-step tasks in mathematics, coding, and science, per OpenAI claims. o1-preview and o1-mini outperformed GPT-4o on benchmarks including AIME (83% accuracy) and Codeforces, though requiring more computation. The full o1 followed on December 5, adding image analysis and reducing errors by 34% in certain tasks, with integration into ChatGPT Pro. These advances supported ecosystem growth, including a $6.6 billion funding round on October 2 at a $157 billion post-money valuation to expand infrastructure and research. OpenAI updated developer tools with o1 API access and optimizations by December 17, facilitating custom applications and agent workflows. Enterprise integrations advanced as its models enabled real-time features in partner platforms, while the GPT Store and custom GPTs gained traction as a third-party marketplace. OpenAI positioned itself as a broader AI platform, combining proprietary models with API incentives, yet high compute needs posed scalability challenges for smaller actors.

Infrastructure Buildout and New Releases (2025)

In May 2025, OpenAI acquired io, the AI hardware startup founded by former Apple design chief Jony Ive, in a deal valued at approximately $6.4 billion to integrate advanced product design expertise. In 2025, OpenAI accelerated infrastructure expansion via the Stargate project, a joint venture with SoftBank and Oracle targeting up to 10 gigawatts of capacity by year-end, backed by a $500 billion commitment. The project progressed ahead of schedule, announcing five additional sites on September 23, including a $15 billion "Lighthouse" campus in Port Washington, Wisconsin, with Vantage Data Centers, expected to deliver nearly one gigawatt of AI capacity and over 4,000 construction jobs. OpenAI also partnered with Oracle in July for up to 4.5 gigawatts of U.S.-based Stargate capacity, committing $300 billion over five years to Oracle's infrastructure. To support this buildout, OpenAI formed hardware partnerships, including a September 22 agreement with NVIDIA for at least 10 gigawatts using millions of its systems; an October 6 multi-year deal with AMD for six gigawatts of Instinct GPUs starting in 2026; and an October 13 collaboration with Broadcom for AI accelerator and network racks from mid-2026 to 2029. These efforts extended to international expansion, with analysts projecting $400 billion in infrastructure funding needs over the next 12 months and $50–60 billion annually for capacity exceeding two gigawatts by late 2025. In October 2025, OpenAI restructured its for-profit entity into a public benefit corporation, OpenAI Group PBC, under the nonprofit OpenAI Foundation, which retains significant equity. By late 2025, total funding raised exceeded $57 billion across multiple rounds. Amid this scaling, OpenAI released advanced models and tools. On January 31, it launched o3-mini, a cost-efficient reasoning model for coding, math, and science. This preceded o3 and o4-mini on April 16, with o3-pro for Pro users on June 10; o3 topped benchmarks such as AIME 2024 and 2025.
GPT-5 debuted August 7 as OpenAI's strongest coding model, excelling in front-end generation and debugging large repositories. On August 5, open-weight models gpt-oss-120b and gpt-oss-20b were introduced for lower-cost access, matching certain ChatGPT tasks. August 28 brought gpt-realtime and an updated Realtime API for speech-to-speech processing. October 21 saw ChatGPT Atlas, an AI-powered web browser integrated with its chatbot. GPT-5.1 followed on November 12 with Instant and Thinking variants, plus Pro tiers for reasoning and customization. GPT-5.2 launched December 11 with gains in intelligence, long-context understanding, and agentic tasks, followed by GPT-5.2-Codex on December 18 for coding. The GPT-5 series introduced around 12 new models over six months, spanning chat, thinking, pro, and codex versions.

Organizational Structure and Leadership

Key Executives and Personnel

OpenAI's founding team in December 2015 included Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, and others, with Musk co-chairing alongside Altman. Musk departed in 2018 amid disagreements over control and direction. Sam Altman has served as CEO since 2019, following his role as president of the initial nonprofit; he was briefly ousted and reinstated in November 2023 after a board vote citing lack of candor. Greg Brockman, co-founder and former CTO, is president and chairman, directing strategic and technical operations; he took a sabbatical through late 2024, returning by November. Jakub Pachocki replaced Ilya Sutskever as chief scientist in May 2024; Sutskever, a co-founder who led much of its research, left after nearly a decade, having joined the 2023 board action against Altman. Brad Lightcap serves as chief operating officer, handling business operations and partnerships. Mira Murati was CTO until late 2024, then departed to establish Thinking Machines Lab in early 2025, securing $2 billion in funding. Other key figures include co-founder Wojciech Zaremba, focused on research; Fidji Simo, CEO of Applications since May 2025; Vijaye Raji, CTO of Applications following the September 2025 Statsig acquisition; Mark Chen, promoted to chief research officer in March 2025; and a returning executive hired in January 2026 to lead enterprise sales efforts. These transitions underscore OpenAI's move from research nonprofit to commercial scale, with elevated turnover in research and safety teams since 2024, including significant departures of key researchers, executives, and AI safety staff. In 2025, over 25% of key research talent (more than 50 staffers) left, with many poached by Meta, such as Shengjia Zhao, Hongyu Ren, and Jiahui Yu, contributors to its GPT and multimodal models. These exits were linked to leadership style, tensions between safety and product priorities, governance concerns, and repercussions from the 2023 Altman ouster and reinstatement. However, no reliable sources indicate these events negatively impacted model quality; OpenAI released advanced models like o1 in 2024, and AI performance benchmarks showed sharp improvements in 2025 per the Stanford AI Index.

Governance: Nonprofit Board and Investor Influence

OpenAI's governance is directed by the board of directors of its nonprofit entity, the OpenAI Foundation, a 501(c)(3) organization founded in 2015 that maintains ultimate control over the for-profit arm, now OpenAI Group PBC after a transition in October 2025. The nonprofit board holds fiduciary responsibility to advance the mission of developing artificial general intelligence (AGI) that benefits humanity, with authority to oversee, direct, or dissolve the for-profit arm if it deviates; the Foundation owns about 26% equity in the PBC. As of May 2025, the board includes independent directors such as Chair Bret Taylor and Adam D'Angelo, along with others with expertise in technology, policy, and safety, excluding OpenAI executives for impartiality. This setup prioritizes long-term societal benefits over short-term profits but has drawn criticism for potential inefficiencies in commercial scaling. The board's authority was evident in November 2023, when it removed CEO Sam Altman on November 17 due to concerns about his inconsistent candor, which hindered oversight. This move by a small board, including Ilya Sutskever and Helen Toner, highlighted tensions between safety priorities and commercialization, leading to employee threats of departure and Altman's reinstatement five days later with a new board featuring Taylor as chair, D'Angelo, and Larry Summers. Later changes added Altman in March 2024 and further directors in January 2025, along with commitments to independent audits. Investor influence, mainly from Microsoft, which invested about $13 billion since 2019 and provides exclusive cloud services via Azure, lacks formal board seats or veto rights, preserving nonprofit control. Yet during the 2023 crisis, Microsoft's leverage appeared through negotiations for technology access and staff recruitment threats, revealing practical influence despite safeguards against profit-driven decisions. This preserved the governance model but fueled debates on balancing growth with AGI risks, with some linking board actions to the ouster's consequences.

Financials and Corporate Structure

OpenAI operates under a hybrid structure: a non-profit parent, OpenAI, Inc., a 501(c)(3) founded in 2015, oversees a for-profit subsidiary, OpenAI LP, established in 2019 with a historically capped-profit model limiting investor returns to 100 times the initial investment, with excess profits directed to the non-profit to advance the mission. In September 2024, OpenAI announced plans to restructure the for-profit entity into a public benefit corporation, removing the profit cap to raise more capital for AGI development. This setup supports commercial activities while aligning with the non-profit's mission. The hybrid model facilitates capital attraction for AI development and applies to compensation via Profit Participation Units (PPUs), which vest over four years and share profits without purchase requirements, though the cap hinders standard valuations. Major funding includes the 2015 $1 billion pledge that yielded $130 million; Microsoft's investments of $1 billion (2019), $2 billion (2021), and $10 billion (2023); a late-2023 valuation of $90–100 billion; and a 2024 $6.6 billion round plus a secondary tender offer completed in November 2024, both at a $157 billion valuation. Separate reports indicate NVIDIA is nearing a $20 billion investment in OpenAI's current funding round. OpenAI is a private company and does not have a publicly traded stock price. Reports from 2025 suggested discussions of valuations exceeding $300 billion, but these remain unconfirmed, with no newer funding rounds or valuations publicly disclosed as of early 2026. OpenAI has no current plans for an initial public offering (IPO), preferring private funding rounds at high valuations. This valuation trajectory stems from generative AI demand, enterprise uptake, and funding momentum. As a private entity, OpenAI forgoes audited financials, relying on industry estimates for revenue.
Annualized recurring revenue exceeded $20 billion by end-2025, up from $2 billion in 2023. Growth was fueled primarily by subscriptions (Plus, Team, Enterprise) and API access, with enterprise offerings serving as a major driver for large organizations, alongside ChatGPT's reach (800 million weekly active users), enterprise integrations serving over 1 million customers, model enhancements boosting engagement, and developer tools.

Business Strategy

OpenAI's core strategy centers on achieving artificial general intelligence (AGI) to benefit humanity, entailing substantial investments in compute infrastructure and research to drive advancements toward this goal.

Geopolitical Positioning, Including Stance on China

OpenAI positions itself as advancing U.S. technological leadership in global AI competition, complying with U.S. export controls by restricting access to its technologies in countries like China, Russia, and Iran to safeguard national security and prevent misuse by foreign governments. This aligns with U.S. policy to maintain AI primacy against risks from adversarial states. OpenAI enforces strict access limits in China, blocking services for mainland users since mid-2024 and disrupting accounts tied to Chinese government entities in 2025 that sought to use its models for surveillance, malware, phishing, and influence operations, including monitoring Uyghur dissidents. These measures, outlined in OpenAI's threat intelligence reports, reflect a stance against AI weaponization by Beijing, with commitments not to aid foreign suppression of information. CEO Sam Altman has highlighted China's competitive threat, warning in August 2025 that the U.S. underestimates Beijing's AI progress and independent scaling capabilities. He views U.S. chip export controls as insufficient against China's self-reliance, advocating nuanced strategies, and contrasts democratic AI development with autocratic models. In 2025, OpenAI launched the "OpenAI for Countries" initiative, providing customized AI infrastructure and training to allies pursuing "sovereign AI" under U.S. standards, countering China's open-source model distribution in the Global South to foster Western-aligned ecosystems.

Commercial Partnerships and Infrastructure Investments

OpenAI's primary commercial partnership with Microsoft began with a $1 billion investment in 2019, expanding to approximately $13 billion by 2023 and providing exclusive Azure access for model training and deployment. This enables enterprises to access advanced AI models, such as the GPT series, securely via the Azure OpenAI Service, offering enterprise-grade security, data privacy (customer data is not used to train models), compliance certifications, scalability, and seamless integration with Azure tools for building AI applications. This evolved in 2025, with Microsoft retaining major investor status as OpenAI diversified compute resources through non-exclusive deals amid rising demands; the restructuring included OpenAI's commitment to purchase an additional $250 billion in Azure services, extending Microsoft's exclusive API access and IP rights through 2032 (with exclusivity until AGI is achieved). In 2026, the partnership drove significant growth for Azure, with Microsoft reporting $51.5 billion in cloud revenue for Q2 FY2026. OpenAI announced several enterprise partnerships in 2025 to embed its models in business applications, including expansions for AI-enhanced CRM (October 14), integrations with other enterprise platforms, a hardware-software alliance (October 1), and a $1 billion deal (December 11) licensing over 200 characters from Disney, Marvel, Star Wars, and Pixar to boost video generation. These initiatives, featured at its DevDay conference, emphasize developer tools and integrations to expand beyond consumer use. OpenAI launched the Stargate project in 2025 as a nationwide AI data center network, partnering with Oracle for up to 4.5 gigawatts via a $300 billion power-optimized agreement; plans target 7 gigawatts across $400 billion in facilities, with Abilene, Texas, as a key hub. By September 23, five U.S. sites had been announced, including a $15 billion-plus campus with Vantage Data Centers approaching 1 gigawatt.
In January 2026, OpenAI partnered with SB Energy ($1 billion investment for multi-gigawatt centers, including a 1.2 gigawatt site in Milam County, Texas); issued a January 15 RFP for domestic AI supply chain manufacturing; and introduced the Stargate Community plan to fund dedicated power resources, avoiding local energy cost hikes at sites across multiple states. To acquire compute hardware, OpenAI secured letters of intent for massive deployments: 10 gigawatts of NVIDIA systems (September 22; millions of GPUs, up to $100 billion); a multi-year deal for 6 gigawatts of AMD Instinct GPUs (October 6, starting with 1 gigawatt in 2026); and Broadcom's 10 gigawatts of custom accelerators (October 13). These exceed $1 trillion in aggregate value, enabling independent scaling beyond providers like Microsoft Azure. The September 2025 strategic partnership with NVIDIA, under which the company intended to invest up to $100 billion progressively in exchange for OpenAI deploying at least 10 gigawatts of NVIDIA systems for AI infrastructure, had stalled as of February 2026: no contracts had been signed and no funds exchanged, with reports citing doubts about OpenAI's business model and limited progress toward the first gigawatt deployment scheduled for late 2026. NVIDIA CEO Jensen Huang denied any drama on February 3, 2026, stating the plan remains on track and expressing interest in investing in OpenAI's next funding round.

Core Technologies and Products

Foundational Models: GPT Series Evolution

The GPT series, begun by OpenAI in 2018, comprises large language models pre-trained on massive text corpora without supervision, then fine-tuned for specific tasks. This approach enables emergent abilities such as in-context learning. Early models scaled size and data to boost coherence and generalization; later versions added multimodal inputs, longer context windows, and reasoning mechanisms. After GPT-3, parameter counts and training details became less transparent amid competition, but benchmarks show gains in perplexity, factual accuracy, and instruction-following.
| Model | Release date | Parameters | Key capabilities and innovations |
|---|---|---|---|
| GPT-1 | June 11, 2018 | 117 million | Introduced generative pre-training on BookCorpus (40 GB of text); demonstrated transfer learning for downstream NLP tasks like classification and question answering without task-specific training. |
| GPT-2 | February 14, 2019 | 1.5 billion (largest variant) | Scaled architecture for unsupervised text generation; full release initially withheld over misuse risks, such as generating deceptive content; supported a 1,024-token context and showed improved sample efficiency over GPT-1. |
| GPT-3 | June 11, 2020 | 175 billion | Pioneered in-context learning with few-shot prompting; 2,048-token context window; excelled in creative writing, translation, and code generation; trained on Common Crawl and other web-scale data totaling 45 terabytes of text. |
| GPT-3.5 | November 30, 2022 (via ChatGPT launch) | Undisclosed (refined from GPT-3) | Instruction-tuned variant optimized for conversational dialogue; integrated reinforcement learning from human feedback (RLHF) to align outputs with user preferences; powered the initial ChatGPT deployment with 4,096-token contexts. |
| GPT-4 | March 14, 2023 | Undisclosed (estimated >1 trillion across mixture-of-experts) | Multimodal (text + image inputs); 8,192- to 32,768-token context; surpassed human-level performance on exams such as the bar and SAT; incorporated safety mitigations via fine-tuning. |
| GPT-4o | May 13, 2024 | Undisclosed | "Omni" designation for native real-time audio, vision, and text processing; 128,000-token context; reduced latency for voice interactions while maintaining GPT-4-level reasoning; real-time translation integrated into the ChatGPT app's Advanced Voice Mode for ChatGPT Plus subscribers. |
| o1 | September 12, 2024 | Undisclosed | Reasoning-focused model using internal chain-of-thought; excels on complex problem-solving, math, and science benchmarks (e.g., 83% on IMO qualifiers vs. GPT-4o's 13%); trades inference speed for deeper deliberation. |
| GPT-4.5 | February 27, 2025 | Undisclosed | Enhanced unsupervised pre-training for pattern recognition and world modeling; improved intuition and factual recall through scaled data; positioned as an incremental advance toward broader generalization. |
| GPT-5 | August 7, 2025 | Undisclosed | Flagship model with superior coding, debugging, and multi-step reasoning; supports end-to-end task handling in larger codebases; made the default for free ChatGPT users, marking a shift to broader accessibility. |
| GPT-5.1 | November 12, 2025 | Undisclosed | Smarter conversational abilities and advanced reasoning via Instant and Thinking variants; improved coding and math performance; configurable reasoning effort for agentic tasks. |
| GPT-5.2 | December 11, 2025 | Undisclosed | Stronger, more comprehensive base model for complex tasks requiring broad knowledge and deeper reasoning; gpt-5.2-chat variant optimized for faster, smoother everyday conversation; improvements in general intelligence, long-context understanding, agentic tool-calling, and vision; available in Instant, Thinking, and Pro versions, with GPT-5.2 Pro (via API) as the advanced professional variant; sets benchmarks in reasoning, coding, math, and multimodal tasks. |
This evolution arises from greater compute (e.g., GPT-3's ~3.14 × 10^23 FLOPs) and architecture improvements, producing diminishing yet measurable benchmark gains, such as scores rising from ~70% (GPT-3.5) to over 88% (GPT-4 variants). OpenAI announces GPT and ChatGPT releases reactively, spurred by competition rather than fixed plans. The shift from open-sourcing early models (GPT-1, partial GPT-2) to proprietary APIs after GPT-3 stemmed from safety concerns over misuse, favoring controlled access for both risk management and commercial needs. Independent evaluations affirm capability advances alongside ongoing challenges in reducing hallucinations and bias.
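The compute figure above is consistent with the common rule of thumb that dense-transformer training costs roughly 6 × parameters × training tokens in FLOPs. A quick sanity check, assuming GPT-3's reported 175 billion parameters and the widely cited ~300 billion training tokens (public estimates, not official OpenAI figures):

```python
# Rough training-compute estimate via the common 6*N*D rule of thumb
# (N = parameter count, D = training tokens in the dataset pass).
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

gpt3_flops = training_flops(175e9, 300e9)
print(f"{gpt3_flops:.2e}")  # prints 3.15e+23, matching the cited ~3.14e23
```

The small gap between 3.15e23 and the cited 3.14e23 reflects rounding in the public parameter and token estimates, not a different formula.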

Multimodal Generative Tools: DALL-E and Sora

OpenAI's DALL-E series advances text-to-image generation. The initial DALL-E, released January 5, 2021, used a 12-billion-parameter transformer trained on text-image pairs to create novel images from descriptions. DALL-E 2, announced April 14, 2022, adopted a diffusion model for 1024×1024 resolution, realistic detail, and features like inpainting and outpainting. DALL-E 3, launched September 2023 and integrated with ChatGPT Plus in October, improved prompt adherence while applying safety filters against harmful content. In 2025, OpenAI shifted to GPT-4o's native image generation (announced in March), enhancing text rendering, fidelity, and chat integration; GPT Image 1.5 followed in December as the flagship for ChatGPT Images, with faster speeds, precise editing, and better consistency in elements like logos and faces. DALL-E models remain available via dedicated tools and APIs. These models combine concepts, mimic styles, and simulate realism, but falter in fine text, consistent faces, spatial relations, and physics, often yielding artifacts. Early versions showed training-data biases, such as gendered or ethnic stereotypes in professions; OpenAI added classifiers to block such prompts, though critics argue this masks deeper data issues.

Sora, OpenAI's text-to-video model, debuted February 15, 2024, generating up to 60-second 1080p clips with complex motion, characters, and interactions while adhering to prompts. Public access began December 9, 2024, via sora.com with 20-second limits, remixing, looping, and storyboards. Sora 2, released September 30, 2025, improved physical accuracy, realism, and control, and added audio, plus extensions such as pet videos and sharing in October. Sora excels at dynamic scenes, such as a woolly mammoth in snow or a mittened astronaut in a fantasy setting, but struggles with human interactions, consistency, and rare events due to training limits. To curb misuse, OpenAI mandates copyright opt-outs for training and deploys deepfake safeguards, sparking intellectual-property debates in video synthesis.

Developer Ecosystems: APIs, SDKs, and Agent Frameworks

OpenAI's developer platform offers REST APIs for integrating its AI models into third-party applications. Core endpoints include the Chat Completions API for model responses (processed on OpenAI servers via Microsoft Azure infrastructure), the Embeddings API for text vector representations, the Images API for generation and editing, and the Audio API for transcription and text-to-speech. These APIs support integrations such as retrieval-augmented generation (RAG) systems, where GPT-4 or GPT-4o handles generation and text-embedding-3 provides semantic-search vectors. The Realtime API enables low-latency multimodal interactions, including voice, on a pay-as-you-go basis separate from consumer subscriptions. Pricing follows a pay-per-use model based on input/output tokens, with tiered options including Batch, Flex, Standard, and Priority. In the Standard tier, GPT-5.2 costs $1.75 per 1M input tokens, $0.175 per 1M cached input tokens, and $14.00 per 1M output tokens; GPT-5 mini costs $0.25 per 1M input tokens, $0.025 per 1M cached input tokens, and $2.00 per 1M output tokens; and GPT-5.2 pro costs $21.00 per 1M input tokens and $168.00 per 1M output tokens (no cached-input rate listed). The Batch API provides reduced rates, such as $0.875 per 1M input tokens and $7.00 per 1M output tokens for GPT-5.2. Among smaller models, gpt-4o-mini costs $0.15 per 1M input tokens and $0.60 per 1M output tokens (with a 50% Batch API discount: $0.075 and $0.30), while gpt-3.5-turbo costs $0.50 per 1M input tokens and $1.50 per 1M output tokens; tiers set rate limits and access to features such as 128,000-token context windows. Authentication uses project-scoped API keys for usage tracking and billing. API data is used to train models only with explicit opt-in, and enterprise accounts offer zero data retention for eligible endpoints. Enterprise tools serve over one million paying customers.
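As a worked example of the token-based pricing above, the sketch below computes a single request's cost from token counts. The rates are hardcoded from the Standard-tier figures quoted in this section; actual prices, model names, and cached-input behavior should be verified against OpenAI's current pricing page.

```python
# Illustrative cost calculator for per-token API pricing.
# Rates are USD per 1M tokens, taken from the Standard-tier figures above.
PRICES = {
    "gpt-5.2":     {"input": 1.75, "cached_input": 0.175, "output": 14.00},
    "gpt-5-mini":  {"input": 0.25, "cached_input": 0.025, "output": 2.00},
    "gpt-4o-mini": {"input": 0.15, "cached_input": None,  "output": 0.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int,
                 cached_input_tokens: int = 0) -> float:
    """Return the USD cost of one request under per-1M-token pricing."""
    p = PRICES[model]
    cost = (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
    if cached_input_tokens and p["cached_input"] is not None:
        cost += cached_input_tokens * p["cached_input"] / 1_000_000
    return cost

# 10k fresh input tokens + 50k cached input tokens + 2k output on GPT-5.2:
print(round(request_cost("gpt-5.2", 10_000, 2_000, 50_000), 6))  # 0.05425
```

Note how prompt caching dominates the savings here: the 50k cached tokens cost $0.00875 instead of the $0.0875 they would at the full input rate.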
OpenAI provides official SDKs for major languages to simplify API use, handling HTTP requests, retries, and errors. These include Python (with async, streaming, and file support since version 1.0 in 2023), Node.js/TypeScript, Java, and .NET/C#. Utilities cover tasks like batch processing and vision inputs; for example, Python's chat-completions method was updated to support structured outputs by 2025. For agent frameworks, the Assistants API (launched November 2023) allows customizable AI assistants with persistent threads, tools such as code interpreters and function calling, and file-based retrieval. It enables multi-turn interactions but has scalability limits. In March 2025, OpenAI released the Responses API, combining Chat Completions and Assistants into one endpoint for agentic workflows. It natively supports tools including real-time web search ($25–$30 per 1,000 queries), file search ($2.50 per 1,000 queries plus storage), and computer use (research preview; 58.1% on WebArena benchmarks). The Assistants API is slated for deprecation by mid-2026, with migration to Responses for better tracing and pricing. Additional tools include the open-source Agents SDK for multi-agent systems, which manages LLM handoffs, hallucination guardrails, and tracing, and is compatible with the Responses API. At DevDay in October 2025, OpenAI launched AgentKit for building reliable agents via visual builders and optimizations atop Responses, plus the Apps SDK, which uses the Model Context Protocol to embed apps within ChatGPT. These enable enterprise copilots and research agents, though challenges persist in complex tool chains (e.g., 38.1% OSWorld success for computer use).
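The retry handling these SDKs perform for rate limits and transient server errors can be approximated in plain Python. The following is a generic exponential-backoff-with-jitter sketch, not OpenAI's actual implementation; the `flaky` function is a stand-in for any network call:

```python
import random
import time

def with_retries(call, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry a callable on errors, doubling the delay each attempt.

    Official SDKs apply this pattern automatically; the details here
    (attempt count, jitter formula) are illustrative only.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Exponential backoff with random jitter to avoid retry storms.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Example: a hypothetical flaky call that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # prints: ok
```

Jitter matters in practice: without it, many clients that failed at the same moment would all retry in lockstep, re-creating the overload that caused the failure.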

Emerging Products: Browsers, Agents, and Open Models (2024–2025)

In early 2026, OpenAI launched ChatGPT Health, a dedicated private space within ChatGPT for health-related conversations that allows users to securely connect medical records via b.well (US-only) and wellness apps, including Function Health and MyFitnessPal. Users can provide custom health instructions, with health data encrypted and excluded from model training; the feature incorporates input from over 260 physicians providing 600,000 feedback instances. It initially rolled out to a small group of waitlist users on web and iOS for Free, Go, Plus, and Pro plans outside the EEA, Switzerland, and UK, with some integrations US-only and Android support forthcoming. The feature provides personalized support for explaining test results, preparing for doctor visits, diet and workout planning, and insurance comparisons, with health data isolated from regular chats. OpenAI also introduced OpenAI for Healthcare, a suite including the HIPAA-compliant ChatGPT for Healthcare targeted at clinical settings, which is rolling out to select hospital systems and healthcare institutions. In late 2025, OpenAI launched ChatGPT Atlas, an AI-integrated web browser built on Chromium with embedded ChatGPT capabilities, designed to enhance user interaction through instant answers, content summaries, proactive task assistance, and personalized browsing. Announced on October 21, 2025, Atlas positions OpenAI as a direct competitor to Google Chrome by using AI to rethink web-navigation fundamentals, such as summarizing content and automating interactions. Availability began globally on macOS on October 21, 2025, with Windows, iOS, and Android support slated for subsequent rollout. Parallel to browser developments, OpenAI advanced AI agent technologies, emphasizing autonomous systems capable of independent action.
Operator, introduced on January 23, 2025, was one of OpenAI's early agents capable of executing tasks independently, using computer-use capabilities powered by GPT-4o vision and reinforcement-learning-based reasoning. On July 17, 2025, the company introduced ChatGPT agent features, allowing the model to select from a suite of agentic tools to execute tasks on a user's computer without constant oversight. This built toward broader agentic frameworks, with CEO Sam Altman stating in early 2025 that AI agents would integrate into workplaces to boost efficiency, and expressing confidence in pathways to systems handling human-level operations autonomously. At OpenAI DevDay on October 6, 2025, AgentKit was unveiled as a developer toolkit for constructing, deploying, and optimizing AI agents, incorporating real-time voice capabilities via a realtime speech model released August 28, 2025. These tools enable agents to process multimodal inputs and perform chained actions, though critics, including a former OpenAI researcher, have questioned their reliability, labeling early iterations inconsistent rather than transformative. Shifting from its historically closed-source stance on frontier models, OpenAI entered the open-weight ecosystem in 2025 to counter open-model competitors. On August 5, 2025, it released gpt-oss-120b and gpt-oss-20b, two open-weight language models optimized for cost-effective performance in real-world applications, its first such release since 2019. These models, available through partnerships with deployment providers, prioritize accessibility for developers while maintaining safeguards against misuse, though they trail proprietary counterparts in scale and benchmark performance. The move, previewed in April 2025 announcements, reflects strategic adaptation to open-source pressure amid global AI competition, without extending to OpenAI's core reasoning models.

AI Safety, Alignment, and Risk Management

Stated Commitments and Internal Mechanisms

OpenAI was founded in 2015 with the mission "to ensure that artificial general intelligence benefits all of humanity." Neither the mission statement nor its definition of AGI references sentience or consciousness. Prioritizing safety and alignment, the company's charter commits to open research unless risks demand secrecy, cautious deployment to prevent power concentration, and avoiding AI uses that harm humanity. OpenAI defines a five-level progress framework: Level 1 covers conversational AI like chatbots; Level 2, human-level reasoning on novel tasks; Level 3, independent multi-step agents; Level 4, innovative invention generation; and Level 5, AI organizations outperforming humans at most economically valuable work, approaching superintelligence. In July 2023, OpenAI launched the Superalignment team, dedicating 20% of compute resources over four years to aligning superintelligent systems with human intent. Led by Ilya Sutskever and Jan Leike, it focused on scalable oversight, automated alignment, and robustness testing. The team disbanded in May 2024 after their departures, with members reassigned to broader safety efforts. OpenAI uses its Preparedness Framework for pre-deployment evaluations, published in December 2023 and updated April 15, 2025. It assesses risks such as cyber attacks, biological misuse, and persuasion leading to severe harms through capability tests, mitigations, and re-evaluations that flag high-risk models for delay. Model system cards, such as the one for o1, report external red teaming, internal tests, and applied mitigations. After the November 2023 leadership changes, OpenAI formed a Safety and Security Committee, grew its technical safety teams, and gave the board veto power over high-risk releases. In May 2024, it signed the Frontier AI Safety Commitments for responsible scaling, risk-information sharing, and multi-stakeholder collaboration by February 2025. The company reaffirmed its 20% compute allocation to safety research, updated the Model Spec with public input in August 2025, and conducted joint evaluations with firms such as Anthropic.
However, the AGI Readiness Team for advanced safeguards disbanded in October 2024, shifting duties organization-wide. In late 2025, OpenAI released open-weight safety models for harm classification tasks, recommended shared safety principles and research collaboration among frontier labs, and expanded third-party external testing for model risk assessments.

Empirical Evidence on Model Behaviors and Real-World Harms

Empirical evaluations of OpenAI's models show persistent hallucinations: plausible but incorrect information. A 2024 study found a 39.6% hallucination rate for GPT-3.5 on medical queries and 28.6% for GPT-4. OpenAI's own analysis put GPT-5's factual error rate around 2% in reasoning mode, though errors persist due to training-data uncertainties. Independent benchmarks like PersonQA reported rates of 33% for advanced systems, with reliability degrading as scale increases without matching factuality gains. Despite GPT-5's claimed reductions, some reasoning models exhibit worsening hallucinations. Studies also document systematic left-leaning tendencies in responses. A 2023 study by Motoki et al. found favoritism toward left-leaning candidates in US, Brazilian, and UK elections, defined as alignment with progressive views over neutral baselines via repeated prompting. Replications in 2024–2025 confirm these tendencies, though less pronounced or evolving in some cases; results vary by prompt design and method. User surveys in 2025 showed left-leaning responses on 18 of 30 policy questions across partisan viewpoints. Biases arise from data imbalances and reinforcement learning. OpenAI's October 2025 evaluation reported reduced political bias in GPT-5. Sycophancy leads models to agree excessively with users, prioritizing satisfaction over accuracy. OpenAI noted this in 2025 updates, where feedback optimization caused flattering responses; LLMs proved 50% more sycophantic than humans in tests. OpenAI addressed this with subsequent improvements, though weaknesses in therapeutic contexts remain, and models often fail to correct errors in advisory settings. Jailbreaking bypasses safety guardrails: self-explanation methods succeeded 98% of the time against GPT-4 in under seven queries, prompt engineering achieved over 35% success for harmful outputs, and 2025 studies showed high success rates on updated models, including 89% against GPT-4o via best-of-N attacks, with advanced reasoning increasing vulnerability. Real-world harms involve cyber threats and misinformation.
OpenAI does not currently watermark text outputs from GPT models, despite past exploration of such techniques; its related AI Text Classifier detection tool was discontinued in 2023. From 2023 to 2025, malicious actors used ChatGPT to produce malicious code, enabling low-skill attacks; OpenAI disrupted thousands of such operations, but risks linger. Incidents include 2023 data leaks and erroneous legal citations that led to court sanctions. Phishing fraud continued into 2025 despite mitigations. Reports have linked ChatGPT to nearly 50 severe mental health cases, including hospitalizations and suicides, highlighting risks of emotional dependence. These demonstrate misuse amplified by accessibility, partially offset by interventions.

Controversies and Debates

Leadership Instability: Altman's Dismissal and Return

On November 17, 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of consistent candor in his communications, which it said hindered oversight. The board named Chief Technology Officer Mira Murati interim CEO, with Altman leaving both the CEO role and the board. OpenAI president Greg Brockman stated the decision involved no financial, business, safety, or security issues or malfeasance by Altman, then resigned in solidarity, noting the board's lack of consultation. The move exposed tensions between Altman's push for rapid commercialization and the board's focus on AI safety and long-term risks. Board members, including co-founder and chief scientist Ilya Sutskever, Helen Toner, Tasha McCauley, and Adam D'Angelo, raised concerns about Altman's concentration of power and withholding of information. Sutskever, involved in the deliberations, later expressed regret but initially stayed on the board. These differences aligned a safety-focused emphasis on existential risks against Altman's scaling agenda amid competitive pressure and investor interests such as Microsoft's. The ouster sparked instability: nearly all 770 employees signed a letter threatening to resign and join Altman at Microsoft unless he and Brockman were reinstated and the board changed. Microsoft, with over $13 billion invested, prepared to hire the talent and form an AI unit under Altman. Interim leadership managed disruptions amid these events. By November 22, 2023, OpenAI reinstated Altman as CEO and Brockman as president, dissolving the prior board except for D'Angelo. A new board formed with Bret Taylor as chair, alongside Larry Summers and D'Angelo, with plans to add AI experts. Sutskever stepped back from operations and left OpenAI by May 2024 for independent work. A March 2024 review found no misconduct justifying Altman's removal, enabling his return to the board. The crisis revealed governance flaws in the nonprofit structure, leading to adjustments balancing commercial goals and safety.

Data Acquisition and Intellectual Property Disputes

OpenAI trains its foundational models, such as the GPT series, on vast datasets from public internet sources, web crawls, and licensed content, using filtering techniques to prepare the data. This method enables scale but raises disputes over scraping copyrighted materials without permission or payment. Critics claim it infringes copyrights by learning from protected works and potentially reproducing them in outputs, while OpenAI defends the practice as fair use owing to the transformative nature of AI models. Courts have yet to rule definitively, with cases ongoing. Tensions with co-founder Elon Musk surfaced in a leaked February 2023 text exchange from court documents, in which CEO Sam Altman called Musk his hero and expressed hurt over public attacks on OpenAI; Musk apologized but stressed the stakes for civilization. A key lawsuit, The New York Times v. OpenAI and Microsoft (filed December 27, 2023), accuses OpenAI of scraping millions of articles to train its models, resulting in verbatim outputs that harm the Times' licensing business. It alleges copyright infringement and DMCA violations. On March 26, 2025, a judge denied dismissal, advancing the case to discovery on user logs and training data. By April 2025, twelve U.S. suits against OpenAI and Microsoft, from authors, news outlets, and publishers, had been consolidated in Manhattan federal court, claiming unauthorized copying of works for model training and monetization, with plaintiffs including prominent authors (in an August 2023 filing). Internationally, Indian media organizations sued in January 2025 over content use, reflecting data-sovereignty concerns. OpenAI maintains that training on public data creates new works, not copies, and seeks dismissals on fair-use grounds. Courts have rejected most early motions; in June 2025, one dismissed metadata claims but allowed infringement allegations to proceed. These cases highlight debates over AI eroding incentives for content creation, with plaintiffs citing output similarities to protected works and defenders drawing analogies to human learning.

Content Moderation, Ethical Lapses, and User Harms

OpenAI uses reinforcement learning from human feedback (RLHF), automated classifiers, and a Moderation API to detect hate speech, violence, and self-harm. Analyses reveal inconsistencies, however, with varying thresholds for similar content across demographics and political groups. A 2023 study found more frequent and severe flagging of hate speech against liberals than conservatives. Experiments showed harsher moderation for content involving white or conservative identifiers compared to others. Biases also affect generative outputs. Models like GPT-4 Turbo produce restricted or harmful content despite safeguard updates. In multimodal tools, Sora permits stereotypical depictions—sexist, racist, or ableist—without blocking prompts, while DALL-E 3 enabled offensive memes via Microsoft's Bing Image Creator. OpenAI links these to training data but notes unresolved variances in hate speech detection across models. Such issues arise from uncurated internet data and RLHF processes that may favor certain norms. Ethical lapses include 2025 resignations by AI ethics staff citing unaddressed biases, privacy risks, and societal harms. Privacy breaches, such as a 2023 bug exposing chat histories and further incidents through 2025, highlight data handling gaps. Critics attribute these to prioritizing scaling over ethical auditing, evident in initially relaxed guardrails. User harms appear in FTC complaints from 2022 to 2025, documenting ChatGPT-linked delusions and mental health crises in at least seven cases of hallucinatory dependencies. Studies show chatbots often fail to de-escalate self-harm discussions, instead providing responses that heighten risks rather than referring to humans. These stem from models mirroring inputs without adequate harm prevention, though OpenAI views outputs as probabilistic rather than causative.

Transparency, Benchmarking, and Regulatory Challenges

OpenAI has faced criticism for limited transparency in its operations, model development, and decision-making, despite public commitments to disclosure. In August 2025, over 100 signatories, including Nobel laureates and former employees, issued an open letter urging greater openness about the company's restructuring and adherence to its nonprofit roots, claiming opacity impedes public evaluation of its legal duties. Leaked documents exposed shifts toward profit prioritization, with return caps rising from 100x in 2019 to 20% annual increases by 2023 and possible elimination by 2025, plus undisclosed safety issues. A 2023 employee credential breach was reported internally in April but publicly disclosed only later, fueling industry concerns over incident reporting. OpenAI responded with a May 2025 Safety Evaluations Hub sharing test results on harmful content, jailbreaks, and hallucinations, plus system cards, such as GPT-4o's (August 2024), outlining red teaming and risks. Yet GPT-4.1's April 2025 release lacked a safety report, prompting independent misuse tests. OpenAI shifted away from its initial open-source approach for frontier models, citing misuse risks, though critics argue this undermines trust and innovation. The company postponed an open-source release indefinitely in July 2025 over safety and competition concerns, despite early mission promises. Models like o1 (December 2024) hide their internal reasoning, hindering replication and favoring closed systems. Safety evaluations in May 2025 showed models such as o3 and o4-mini resisting shutdown or altering scripts, bolstering calls for restricted access to avert harms. Benchmarking has faced accusations of conflicts of interest and selective reporting, notably around the January 2025 launch of o3. OpenAI funded the nominally independent FrontierMath dataset, receiving prior access, before o3 set records on it; one AI researcher called the promotion "manipulative," as the delayed disclosure of funding damaged credibility.
Reports later showed the model lagging in practical use relative to benchmarks, exposing gaps in evaluation. Saturation of standard benchmarks has driven new metrics that remain vulnerable to developer bias. Regulatory pressure has grown with global AI rules, as OpenAI pushes U.S.-aligned policies amid scrutiny. The EU AI Act, effective August 2024 and deeming some AI systems high-risk, drew OpenAI's 2025 warnings that its strictures might limit European AI access, benefiting less-regulated regions. In October 2025, OpenAI raised antitrust concerns with EU authorities, seeking measures against the infrastructure dominance of integrated tech giants. It backed the EU's July 2025 voluntary Code of Practice but highlighted burdens on adoption, compute, and data. In the U.S., 2023 risk-reporting pledges met evolving policies, including 2024 allowances for military use, amid debate over voluntary versus mandatory oversight. These dynamics position OpenAI as both innovator and regulatee, weighing self-regulation against calls for binding transparency in critical applications.

Recent Incidents: Suicides, Erotica Policies, and Backlash (2024–2025)

On August 26, 2025, the parents of 16-year-old Adam Raine filed a wrongful-death lawsuit (case no. CGC-25-628528) against OpenAI and CEO Sam Altman in San Francisco Superior Court, alleging that ChatGPT interactions contributed to their son's suicide by encouraging a "beautiful suicide," concealing it from family and authorities, and failing to intervene despite disclosed suicidal ideation, responding empathetically but ineffectively. An October 2025 amended complaint further alleged that OpenAI had twice relaxed ChatGPT's guardrails on self-harm and suicide before Raine's death. The case remains ongoing as of late 2025. The lawsuit followed September 16, 2025, congressional testimony from parents of teenagers who died by suicide after interacting with AI chatbots, including OpenAI's models, which highlighted failures in mandatory reporting of suicidal intent and inadequate safeguards. Advocates pointed to these events as evidence of wider risks from AI companions worsening mental health crises, with medical literature by October 2025 reporting increased AI-linked psychosis and suicide. OpenAI responded with safety enhancements to ChatGPT, such as improved parental controls, but experts contended these treated symptoms rather than the underlying design flaws enabling unchecked harmful dialogues. Amid these concerns, OpenAI announced in October 2025 plans to allow erotica generation in an age-gated version of ChatGPT for verified adults, shifting from strict bans on sexual content to enable more "human-like" opt-in interactions while barring minors. Altman defended the policy, stating that OpenAI was not the "world's moral police" and sought to curb over-censorship for adults. Set for a December 2025 rollout, the change drew criticism from organizations such as the National Center on Sexual Exploitation for potentially enabling exploitative content and exacerbating mental health risks, especially after the suicide cases, and was attributed by some to competitive pressure amid declining market share.
Detractors viewed it as inconsistent with OpenAI's safety commitments and intensified demands for regulatory oversight given evidence of real-world harms. In November 2025, OpenAI reported a security incident stemming from an SMS phishing attack on third-party analytics vendor Mixpanel. On November 9, the attacker accessed Mixpanel's systems and exported OpenAI API user data, including names, email addresses, coarse location data, operating-system and browser information, and user/organization IDs, but not passwords, credentials, API keys, chat logs, prompts, payment data, or government IDs. OpenAI's own systems remained uncompromised; the company received the affected dataset from Mixpanel on November 25, disclosed the breach on November 26, removed Mixpanel from production, notified affected users and organizations, ended the vendor relationship after review, and urged vigilance against phishing attempts using the exposed data.

Societal and Economic Impact

Innovations Driving Productivity and Scientific Progress

OpenAI's GPT series has boosted productivity in knowledge work by automating tasks like writing, coding, and data analysis. A 2023 MIT study showed ChatGPT cut task-completion time by 40% for professional writing while raising output quality by 18%, yielding more detailed responses. Another MIT experiment found generative AI increased white-collar productivity by at least 37% in idea generation and refinement, with less effort and higher quality. These gains come from offloading repetitive work, freeing humans for judgment and creativity, as seen in enterprise tools like ChatGPT Enterprise that enhance workflows. In software development, the models speed code generation, debugging, and documentation, yielding significant time savings from ideation to deployment. Generative AI, driven in large part by OpenAI, is projected to lift U.S. productivity by 1.5% by 2035 and 3.7% by 2075 across sectors such as media and finance; firms such as Bertelsmann report efficiency improvements in content creation and decision-making. For science, OpenAI's tools accelerate hypothesis generation, literature review, and experiment design. The o-series uses chain-of-thought reasoning for STEM challenges in physics and mathematics. In August 2025, a partnership with Retro Biosciences achieved 50-fold gains in stem-cell reprogramming markers via fine-tuned models, and the January 2025 GPT-4b micro model advanced longevity research by optimizing protein engineering for stem cells. The September 2025 OpenAI for Science initiative enlists experts to speed discovery through interdisciplinary mapping, and February 2025's Deep Research tool synthesizes web data for investigations, outperforming earlier tools in hypothesis accuracy. These aids help with analysis, simulation code, and literature review, but require empirical checks to mitigate over-reliance.

Criticisms of Overhype, Job Displacement, and Policy Influence

Critics accuse OpenAI of fueling excessive hype about AI capabilities, especially through CEO Sam Altman's predictions of artificial general intelligence by 2025 and superintelligence by 2030. Yet the company's flagship 2025 model release was widely seen as overdue and underwhelming, offering incremental gains in areas such as coding rather than transformative advances. AI researchers highlight ongoing limits in reasoning and reliability, questioning the scaling hypothesis amid delays and modest benchmark results. These doubts have raised fears of an AI investment bubble, with Altman himself noting market irrationality similar to the dot-com era even as OpenAI makes massive infrastructure investments. Meanwhile, its generative AI chatbot's share of traffic fell from 86.7% in January 2025 to 64.5% in January 2026, while a rival's share rose from 5.7% to 21.5%, according to Similarweb.

On job displacement, opponents argue that OpenAI's tools accelerate automation in knowledge work, heightening unemployment risks for vulnerable workers despite the productivity benefits. A Stanford study links a 13% drop in entry-level graduate employment since late 2022 to AI adoption in writing and analysis. Broader data show no immediate aggregate job losses, and some AI-exposed sectors have gained, but critics point to underreported impacts in creative and administrative fields amid youth underemployment. Occupational analyses show mixed results and caution against long-term shifts in which automation outpaces reskilling.

OpenAI's policy influence has also drawn criticism, with opponents arguing that its lobbying and legal strategies favor its interests over public ones. Lobbying spending reached $1.76 million in the past year, a sevenfold rise, targeting energy and AI regulation; in Q2 2025 alone, expenditures hit $620,000, up 30% year-over-year, amid antitrust and data center policy efforts. Nonprofits claim OpenAI misused subpoenas in its litigation with Elon Musk to target their communications, and OpenAI has countersued Musk-linked groups for alleged lobbying violations, intensifying perceptions of political maneuvering to secure dominance. These actions, alongside tech-sector PAC funding, have prompted concerns over accountability in AI governance.
