OpenAI, Inc. is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[5] As a leading organization in the ongoing AI boom,[6] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[7][8] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
| | |
|---|---|
| Company type | Private |
| Industry | Artificial intelligence |
| Founded | December 11, 2015 |
| Headquarters | San Francisco, California, U.S.[1] |
| Key people | |
| Products | OpenAI Five |
| Revenue | US$3.7 billion[3] (2024 est.) |
| Net income | US$−5 billion[3] (2024 est.) |
| Number of employees | c. 2,000 (2024)[4] |
| Subsidiaries | |
| Website | openai.com |
The organization consists of the non-profit OpenAI, Inc.,[9] registered in Delaware, and its for-profit subsidiary introduced in 2019, OpenAI Global, LLC.[10] Its stated mission is to ensure that AGI "benefits all of humanity".[11] Microsoft owns roughly 49% of OpenAI's equity, having invested US$13 billion.[12] It also provides computing resources to OpenAI through its cloud platform, Microsoft Azure.[13]
In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work had been used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later after the board itself was reconstituted. Throughout 2024, roughly half of the AI safety researchers then employed at OpenAI left the company, citing its prominent role in what they saw as an industry-wide problem.[14][15]
In December 2015, OpenAI was founded by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research.[16][17] However, only $130 million of the pledged contributions had actually been collected by 2019.[10] According to an investigation led by TechCrunch, while YC Research never contributed any funds, Open Philanthropy contributed $30 million and another $15 million in verifiable donations was traced back to Musk.[18] OpenAI later stated that Musk's contributions totaled less than $45 million.[19] The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public.[20][21] OpenAI was initially run from Brockman's living room.[22] It was later headquartered at the Pioneer Building in the Mission District, San Francisco.[23][24]
According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of the "best researchers in the field".[25] Brockman was able to hire nine of them as the first employees in December 2015.[25] In 2016, OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google.[25]
Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect.[25] OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission."[25] Brockman stated that "the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way."[25] OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.[25]
In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.[26] Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the potential to reduce processing time from six days to two hours.[27][28] In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.[29][30][31][32]
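Gym's design centered on a small, uniform Python interface: an environment is created by name, reset to start an episode, and advanced one step at a time with an action, returning an observation and a reward. A minimal sketch using the classic (pre-0.26) interface, with a random policy standing in for a learning agent, might look like this:

```python
# Minimal sketch of the classic (pre-0.26) OpenAI Gym interface; a random policy
# stands in for a learning agent.
import gym

env = gym.make("CartPole-v1")           # create a built-in control environment by name
observation = env.reset()               # begin an episode and get the initial observation
for _ in range(200):
    action = env.action_space.sample()  # sample a random action from the action space
    observation, reward, done, info = env.step(action)  # advance the environment one step
    if done:                            # episode over (pole fell or time limit reached)
        observation = env.reset()
env.close()
```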
In 2017, OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone.[33] In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.
In 2018, Musk resigned from his seat on the board of directors, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars.[34] Sam Altman has claimed that Musk believed OpenAI had fallen behind other players such as Google, and that Musk proposed to take over OpenAI himself, which the board rejected. Musk subsequently left OpenAI.
In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text.[35]
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment.[36] According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company.[37] Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to provide.[38] Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[39]
The company then distributed equity to its employees and partnered with Microsoft,[40] which announced an investment package of $1 billion in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft.[41][42][43]
OpenAI Global, LLC then announced its intention to commercially license its technologies.[44] It planned to spend the $1 billion "within five years, and possibly much faster".[45] Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.[46]
The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated "I disagree with the notion that a nonprofit can't compete" and pointed to successful low-budget projects by OpenAI and others. "If bigger and better funded was always better, then IBM would still be number one."
The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.[37] In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest.[38] Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI.[47]
In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product.[48]
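The API was offered as a hosted web service with an official Python client. The following is a minimal, illustrative sketch using the early (pre-1.0) openai Python package; the engine name, prompt, and placeholder key are assumptions chosen for illustration rather than details from the announcement:

```python
# Illustrative sketch only: early (pre-1.0) openai Python client as used with the GPT-3 API.
# The engine name, prompt, and placeholder API key are assumptions for illustration.
import openai

openai.api_key = "sk-..."  # placeholder; a real key is issued per account

# Request a text completion; "davinci" was one of the original GPT-3 engines.
response = openai.Completion.create(
    engine="davinci",
    prompt="Q: What is the capital of France?\nA:",
    max_tokens=16,
    temperature=0.0,
)
print(response.choices[0].text.strip())
```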
Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic.[49]
In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture.[50]
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.[52] According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.[53]
In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value.[54] On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partly provided in the form of Microsoft's cloud-computing service, Azure.[55][56] Rumors of this deal suggested that Microsoft may receive 75% of OpenAI's profits until it secures its investment return, as well as a 49% stake in the company.[57] The investment is believed to be part of Microsoft's efforts to integrate OpenAI's ChatGPT into the Bing search engine. Google announced a similar AI application (Bard) after ChatGPT was launched, fearing that ChatGPT could threaten Google's place as a go-to source for information.[58][59]
On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products.[60]
On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI.[61]
On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus.[62]
On May 22, 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence.[63] They argued that superintelligence could arrive within the next 10 years, enabling a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligence and for more coordination, for example through governments launching a joint project which "many current efforts become part of".[63][64]
In July 2023, OpenAI launched the superalignment project, aiming to discover within four years how to align future superintelligences by using AI to automate alignment research.[65]
In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools.[66]
On September 21, 2023, Microsoft began rebranding all variants of its Copilot to Microsoft Copilot, including the former Bing Chat and Microsoft 365 Copilot.[67] In December 2023, Copilot was added to many installations of Windows 11 and Windows 10, and a standalone Microsoft Copilot app was released for Android,[68] followed by an iOS version.[69]
In October 2023, Sam Altman and Peng Xiao, CEO of the Emirati AI firm G42, announced that OpenAI would let G42 deploy OpenAI technology.[70]
On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries.[71] On November 14, 2023, OpenAI announced they temporarily suspended new sign-ups for ChatGPT Plus due to high demand.[72] Access for newer subscribers re-opened a month later on December 13.[73]
In January 2024, OpenAI announced a partnership with Arizona State University that would give the university complete access to ChatGPT Enterprise, OpenAI's first partnership with an educational institution.[74] In February, the U.S. Securities and Exchange Commission was reportedly investigating OpenAI over whether company communications made by Altman were used to mislead investors; a separate investigation of Altman's statements, opened by the U.S. Attorney's Office for the Southern District of New York, was also ongoing.[75][76]
On February 15, 2024, OpenAI announced a text-to-video model named Sora, which it planned to release to the public at an unspecified date.[77] It was made available to red teams to assess critical harms and risks.[78]
On February 29, 2024, OpenAI and CEO Sam Altman were sued by Elon Musk, who accused them of prioritizing profits over public good, contrary to OpenAI's original mission[10] of developing AI for humanity's benefit.[79] The lawsuit cited OpenAI's policy shift after partnering with Microsoft, questioning its open-source commitment and stirring the AI ethics-vs.-profit debate.[80] OpenAI stated that "Elon understood the mission did not imply open-sourcing AGI."[81] It denied being a de facto Microsoft subsidiary.[82] On March 11, in a court filing, OpenAI said it was "doing just fine without Elon Musk" after he left in 2018. It responded to Musk's lawsuit by calling his claims "incoherent", "frivolous", "extraordinary" and "a fiction".[83] In June, Musk unexpectedly withdrew the lawsuit,[84] but in August he reopened it against Altman and others, alleging that Altman had told him OpenAI would be founded as a non-profit.[85][86]
On May 15, 2024, Ilya Sutskever resigned and was replaced as chief scientist by Jakub Pachocki.[87] Jan Leike, the other co-leader of the superalignment team, announced his departure, citing an erosion of safety and trust in OpenAI's leadership.[88] These departures, along with those of other researchers, led OpenAI to absorb the team's work into other research areas and shut down the superalignment group.[89] According to sources interviewed by Fortune, OpenAI's promise to allocate 20% of its computing capacity to the superalignment project had not been fulfilled.[90]
On May 19, 2024, Reddit and OpenAI announced a partnership to integrate Reddit's content into OpenAI products, including ChatGPT. This allows OpenAI to access Reddit's Data API, providing real-time, structured content to enhance AI tools and user engagement with Reddit communities. Reddit plans to develop new AI-powered features for users and moderators using OpenAI's platform. The partnership aligns with Reddit's commitment to privacy, adhering to its Public Content Policy and existing Data API Terms, which restrict commercial use without approval. OpenAI will serve as a Reddit advertising partner.[91]
On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over use of their content to train AI models.[92]
On May 29, 2024, Axios reported that OpenAI had signed deals with Vox Media and The Atlantic to share content to enhance the accuracy of AI models like ChatGPT by incorporating reliable news sources, addressing concerns about AI misinformation.[93] Concerns were expressed by journalists and unions. The Vox Union stated, "As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI."[94]
A group of nine current and former OpenAI employees has accused the company of prioritizing profits over safety, using restrictive agreements to silence concerns, and moving too quickly with inadequate risk management. They call for greater transparency, whistleblower protections, and legislative regulation of AI development.[95]
On June 10, 2024, it was announced that OpenAI had partnered with Apple Inc. to bring ChatGPT features to Apple Intelligence and the iPhone.[96] On June 13, 2024, OpenAI announced that Paul Nakasone, the former head of the NSA, was joining its board. Nakasone also joined the security subcommittee.[97]
On June 24, 2024, OpenAI acquired Multi, a startup running a collaboration platform based on Zoom.[98]
In July 2024, Reuters reported that OpenAI was working on a project to enhance AI reasoning capabilities and to enable AI to plan ahead, navigate the internet autonomously, and conduct "deep research".[99][100] The resulting model was released on September 12 and named o1.[101]
On August 5, TechCrunch reported that OpenAI's co-founder John Schulman had left to join rival startup Anthropic. Schulman cited a desire to focus more on AI alignment research. OpenAI's president and co-founder, Greg Brockman, took extended leave until November.[102]
In September 2024, OpenAI's global affairs chief, Anna Makanju, expressed support for the UK's approach to AI regulation during her testimony to a House of Lords committee, stating that the company favors "smart regulation" and sees the UK's AI white paper as a positive step towards responsible AI development.[103] Chief Technology Officer (CTO) Mira Murati announced her departure from the company to "create the time and space to do my own exploration".[104] It had been reported that Murati was among those who had expressed concerns about Altman to the board.[105]
In October 2024, OpenAI raised $6.6 billion from investors, potentially valuing the company at $157 billion. The funding attracted returning venture capital firms like Thrive Capital and Khosla Ventures, along with major backer Microsoft and new investors Nvidia and SoftBank.[106][107] OpenAI's CFO, Sarah Friar, informed employees that a tender offer for share buybacks would follow the funding, although specifics were yet to be determined. Thrive Capital invested around $1.2 billion, with the option for an additional $1 billion if revenue goals were met. Apple, despite initial interest, did not participate in this funding round.[108]
Also in October 2024, The Intercept revealed that OpenAI's tools were considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the Department of Defense and Microsoft.[109]
In November 2024, OpenAI acquired the long-standing domain Chat.com and redirected it to ChatGPT's main site.[110][111] Moreover, Greg Brockman rejoined OpenAI after a three-month leave from his role as president. An OpenAI spokesperson confirmed his return, highlighting that Brockman would collaborate with Altman on tackling key technical challenges. His return followed a wave of high-profile departures, including Mira Murati and Ilya Sutskever, who had since launched their own AI ventures.[112]
In December 2024, OpenAI launched several significant features as part of its "12 Days of OpenAI" event, which started on December 5. It announced Sora, a text-to-video model intended to create realistic videos from text prompts, made available to ChatGPT Plus and Pro users.[113][114] Additionally, OpenAI launched the o1 model, designed for advanced reasoning through chain-of-thought processing, which lets it reason explicitly before generating responses.[115][116] The model is intended to tackle complex tasks with improved accuracy and transparency. Another major release was ChatGPT Pro, a subscription service priced at $200 per month that provides users with unlimited access to the o1 model and enhanced voice features.[117] OpenAI shared preliminary benchmark results for the upcoming o3 model.[118] The event also saw the expansion of the Canvas feature, giving all users side-by-side digital editing capabilities.
On January 21, 2025, it was announced that OpenAI, Oracle, SoftBank and MGX would launch The Stargate Project, a joint venture to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The project will be funded over the next four years.[119]
On January 24, OpenAI made Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users, available to Pro users in the U.S.A. only.[120][121]
On February 2, OpenAI made its deep research agent, which achieved an accuracy of 26.6 percent on the Humanity's Last Exam (HLE) benchmark, available to users paying the $200 monthly fee, with up to 100 queries per month, while more "limited access" was promised for Plus, Team and later Enterprise users.[122]
On February 10, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI and was willing to match or exceed any better offers.[123][124]
In February 2025, OpenAI underwent a rebranding with a new typeface, word mark, symbol and palette.[125] OpenAI began collaborating with Broadcom in 2024 to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on its 3 nm node. The initiative is intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and in high demand.[126]
Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI gains the ability to redesign itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Co-founder Musk characterizes AI as humanity's "biggest existential threat".[135]
Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence.[136][137] OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".[21] Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach."[138] OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."[21] Co-chair Sam Altman expects the decades-long project to surpass human intelligence.[139]
Vishal Sikka, former CEO of Infosys, stated that an "openness", where the endeavor would "produce results generally in the greater interest of humanity", was a fundamental requirement for his support; and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".[140] Cade Metz of Wired suggested that corporations such as Amazon might be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook, which own enormous supplies of proprietary data. Altman stated that Y Combinator companies would share their data with OpenAI.[139]
In the early years before his 2018 departure, Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." He acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; but nonetheless, that the best defense was "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."[127]
Musk and Altman's counterintuitive strategy—that of trying to reduce the potential harm of AI by giving everyone access to it—is controversial among those concerned with existential risk from AI. Philosopher Nick Bostrom said, "If you have a button that could do bad things to the world, you don't want to give it to everyone."[137] During a 2016 conversation about technological singularity, Altman said, "We don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board". Greg Brockman stated, "Our goal right now ... is to do the best thing there is to do. It's a little vague."[141]
Conversely, OpenAI's initial decision to withhold GPT-2 around 2019, due to a wish to "err on the side of caution" given the potential for misuse, was criticized by advocates of openness. Delip Rao, an expert in text generation, stated, "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous." Other critics argued that open publication was necessary to replicate the research and to create countermeasures.[142]
More recently, in 2022, OpenAI published its approach to the alignment problem, anticipating that aligning AGI to human values would likely be harder than aligning current AI systems: "Unaligned AGI could pose substantial risks to humanity[,] and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together". They stated that they intended to explore how to better use human feedback to train AI systems, and how to safely use AI to incrementally automate alignment research.[143]
In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.[144][145] OpenAI also planned a restructuring to operate as a for-profit company. This restructuring could grant Altman a stake in the company.[146]
At its beginning, OpenAI's research included many projects focused on reinforcement learning (RL).[147] OpenAI has been viewed as an important competitor