Open-source artificial intelligence
Concept of open-source software applied to AI
Open-source artificial intelligence, as defined by the Open Source Initiative, is an AI system that is freely available to use, study, modify, and share.[1][2] This includes the datasets used to train the model, its code, and its model parameters, promoting a collaborative and transparent approach to AI development in which others could create a substantially similar system.[3][4]
Given the range of openness among AI projects, there has been significant debate over what should count as "open-source". Some large language models touted as open-source release only their model weights, not their training data or code,[5][6] and have been criticized as "openwashing"[7] systems that are mostly closed.[8] Several works and frameworks assess the openness of AI systems, and the Open Source Initiative has published a definition of what constitutes open-source AI.[9] Free and open-source software (FOSS) licenses, such as the Apache License, MIT License, and GNU General Public License, outline the terms under which open-source artificial intelligence can be accessed, modified, and redistributed.[10]
The open-source model provides wider access to AI technology, allowing more individuals and organizations to participate in AI research and development.[11] In contrast, closed-source artificial intelligence is proprietary, restricting access to the source code and internal components.[11] Companies often develop closed products in an attempt to keep a competitive advantage in the marketplace.[12] However, some experts suggest that open-source AI tools may have a development advantage over closed-source products and have the potential to overtake them in the marketplace.[12]
Popular open-source artificial intelligence project categories include large language models, machine translation tools, and chatbots.[13] It has been speculated that open-source AI software carries increased risk compared to closed-source AI, since bad actors may remove safety protections from public models at will.[14] Conversely, closed-source AI has been speculated to carry its own increased risks, stemming from dependence, privacy concerns, opaque algorithms, corporate control, and limited availability, while potentially slowing beneficial innovation.[15][8][16]
History
The history of open-source artificial intelligence is intertwined with both the development of AI technologies and the growth of the open-source software movement.[17][better source needed] Open-source AI has evolved significantly over the past few decades, with contributions from various academic institutions, research labs, tech companies, and independent developers.[18][better source needed] This section explores the major milestones in the development of open-source AI, from its early days to its current state.
1990s: Early development of AI and open-source software
The concept of AI dates back to the mid-20th century, when computer scientists like Alan Turing and John McCarthy laid the groundwork for modern AI theories and algorithms.[19] An early form of AI, the natural-language-processing "doctor" ELIZA, was re-implemented and shared by Jeff Shrager in 1977 as a BASIC program, and was soon translated into many other languages. Early AI research focused on developing symbolic reasoning systems and rule-based expert systems.[20]
During this period, the idea of open-source software was beginning to take shape, with pioneers like Richard Stallman advocating for free software as a means to promote collaboration and innovation in programming.[21] The Free Software Foundation, founded in 1985 by Stallman, was one of the first major organizations to promote the idea of software that could be freely used, modified, and distributed. The ideas from this movement eventually influenced the development of open-source AI, as more developers began to see the potential benefits of open collaboration in software creation, including AI models and algorithms.[22][better source needed][18][better source needed]
In the 1990s, open-source software began to gain more traction,[23][better source needed] while the rise of machine learning and statistical methods led to the development of more practical AI tools. In 1993, the CMU Artificial Intelligence Repository was initiated, collecting a variety of openly shared software.[24][better source needed]
2000s: Emergence of open-source AI
In the early 2000s, open-source AI began to take off with the release of more user-friendly foundational libraries and frameworks that were available for anyone to use and contribute to.[25][better source needed]
OpenCV was released in 2000[26] with a variety of traditional AI algorithms like decision trees, k-Nearest Neighbors (kNN), Naive Bayes and Support Vector Machines (SVM).[27]
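For illustration, a minimal sketch of using one of these classical algorithms through OpenCV's Python bindings might look like the following (the data here is synthetic and purely illustrative, not drawn from the cited sources):

```python
import numpy as np
import cv2

# Synthetic, illustrative training data: 25 points in 2-D with two class labels.
train_data = np.random.randint(0, 100, (25, 2)).astype(np.float32)
labels = np.random.randint(0, 2, (25, 1)).astype(np.float32)

# Train a k-nearest-neighbors classifier from OpenCV's ml module.
knn = cv2.ml.KNearest_create()
knn.train(train_data, cv2.ml.ROW_SAMPLE, labels)

# Classify a new point by majority vote among its 3 nearest neighbors.
query = np.random.randint(0, 100, (1, 2)).astype(np.float32)
ret, results, neighbours, dist = knn.findNearest(query, 3)
print(results)  # predicted class label for the query point
```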
2010s: Rise of open-source AI frameworks
The open-source deep learning framework Torch was released in 2002 and made open-source with Torch7 in 2011; it was later followed by PyTorch and TensorFlow.[28]
AlexNet was released in 2012.[29]
GPT-1 was released in 2018.
Beginning in 2018 and accelerating in 2020, China embraced using and building more open AI systems as a way to reduce reliance on Western software and gatekeeping, and to give its industries faster access to higher-powered AI.[30] Projects based in China have since become more widely used around the world and have closed at least some of the gap with leading proprietary American models.[30][31][32]
2020s: Open-weight and open-source generative AI
With the announcement of GPT-2 in 2019, OpenAI originally planned to keep the source code of their models private, citing concerns about malicious applications.[33] After facing public backlash, however, OpenAI released the source code for GPT-2 to GitHub three months after its release.[33] OpenAI did not publicly release the source code or pretrained weights for the GPT-3 model.[34] At the time of GPT-3's release, GPT-2 was still the most powerful open-source language model in the world. Competition to build more open models came mostly from smaller companies such as EleutherAI.[35][36] In 2022, larger and more powerful models were released under licenses of varying openness, including Meta's OPT.[37]
In 2023, Llama 1 and 2 and Mistral AI's Mistral and Mixtral open-weight models were first released,[38][39] along with MosaicML's MPT open-source model.[40][41]
The Open Source Initiative consulted experts over two years to create a definition of "open-source" that would fit the needs of AI software and models. The most controversial aspect relates to data access, since some models are trained on sensitive data that cannot be released. In 2024, the organization published the Open Source AI Definition 1.0 (OSAID 1.0).[1][2][3] It requires full release of the software used to process the data, train the model, and make inferences from the model. For the data, it requires only access to details about the data used to train the AI, so that others can understand and re-create it.[2]
In 2024, Meta released a collection of large AI models, including Llama 3.1 405B, which was competitive with less open models.[42] The company claimed its approach to AI would be open-source, differing from other major tech companies.[42] The Open Source Initiative and others stated that Llama is not open-source despite Meta describing it as open-source, due to Llama's software license prohibiting it from being used for some purposes.[43][44][45]
DeepSeek released their V3 LLM in December 2024, and their R1 reasoning model on January 20, 2025, both as open-weights models under the MIT license.[46][47]
Since the release of OpenAI's proprietary ChatGPT model in late 2022, only a few fully open large language models (with open weights, data, and code) have been released. In September 2025, a Swiss consortium added to this short list by releasing a fully open model named Apertus.[48][49] Latam-GPT, an open Latin America-focused model, launched in 2025 as a regional effort trained primarily on Spanish- and Portuguese-language content.[50][51]
Significance
The label "open-source" can provide real benefits to companies looking to hire top talent or attract customers.[4] The debate around "openwashing" (calling a project open-source when it is mostly closed) has significant implications for the success of various projects within the industry.[7]
Open-source artificial intelligence tends to get more support and adoption in countries and companies that do not have their own leading AI model.[4] These open-source projects can help to undercut the position of business and geopolitical rivals with the strongest proprietary models.[4]
Applications
Healthcare
In the healthcare industry, open-source AI has been used in diagnostics, patient care, and personalized treatment options.[52] Open-source libraries have been used for medical imaging for tasks such as tumor detection, improving the speed and accuracy of diagnostic processes.[53][52] Additionally, OpenChem, an open-source library specifically geared toward chemistry and biology applications, enables the development of predictive models for drug discovery, helping researchers identify potential compounds for treatment.[54]
Military
Meta's Llama models, which have been described as open-source by Meta, were adopted by U.S. defense contractors like Lockheed Martin and Oracle after unauthorized adaptations by Chinese researchers affiliated with the People's Liberation Army (PLA) came to light.[55][56] The Open Source Initiative and others have contested Meta's use of the term open-source to describe Llama, due to Llama's license containing an acceptable use policy that prohibits use cases including non-U.S. military use.[45] Chinese researchers used an earlier version of Llama to develop tools like ChatBIT, optimized for military intelligence and decision-making, prompting Meta to expand its partnerships with U.S. contractors to ensure the technology could be used strategically for national security.[56] These applications now include logistics, maintenance, and cybersecurity enhancements.[56]
Benefits
Privacy and independence
A Nature editorial suggests medical care could become dependent on AI models that could be taken down at any time, are difficult to evaluate, and may threaten patient privacy.[15] Its authors propose that health-care institutions, academic researchers, clinicians, patients, and technology companies worldwide should collaborate to build open-source models for health care whose underlying code and base models are easily accessible and can be fine-tuned freely with their own data sets.[15]
Collaboration and faster advancements
Large-scale collaborations, such as those seen in the development of open-source frameworks like TensorFlow and PyTorch, have accelerated advancements in machine learning (ML) and deep learning.[57] The open-source nature of these platforms also facilitates rapid iteration and improvement, as contributors from across the globe can propose modifications and enhancements to existing tools.[57]
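As an illustration of what such frameworks provide, a minimal PyTorch sketch of a single training step might look like the following (the toy model and random data are purely illustrative, not from the cited sources):

```python
import torch
from torch import nn

# Toy model and data: a single linear layer fit with one gradient-descent step.
# Real projects build far larger models using the same building blocks.
model = nn.Linear(in_features=10, out_features=1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 10)   # a random mini-batch of 32 examples
targets = torch.randn(32, 1)

optimizer.zero_grad()          # clear gradients from any previous step
loss = loss_fn(model(inputs), targets)
loss.backward()                # autograd computes parameter gradients
optimizer.step()               # update the model's parameters
print(float(loss))
```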
Democratizing access
Open-source AI gives countries and organizations that do not otherwise have access to proprietary models a way to use and invest in AI more cheaply.[4]
Transparency
One key benefit of open-source AI is the increased transparency it offers compared to closed-source alternatives.[58][better source needed] The open-sourced aspects of models allow their algorithms and code to be inspected, which promotes accountability and helps developers understand how a model reaches its conclusions.[59][better source needed] Additionally, open-weight models, such as Llama and Stable Diffusion, allow developers to directly access model parameters, potentially facilitating reduced bias and increased fairness in their applications.[59][better source needed] This transparency can help create systems with human-readable outputs, or "explainable AI", an increasingly important concern in high-stakes applications such as healthcare, criminal justice, and finance, where the consequences of decisions made by AI systems can be significant.[60][better source needed]
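For example, a short sketch of how a developer can load an open-weight model and inspect its parameters directly might look like the following (it assumes the Hugging Face transformers library and the openly released GPT-2 weights, neither of which is named in the cited sources):

```python
from transformers import AutoModelForCausalLM

# Download an openly released model and access its weights directly.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Inspect each parameter tensor: its name, shape, and summary statistics.
for name, param in model.named_parameters():
    print(name, tuple(param.shape), float(param.mean()), float(param.std()))

# Total number of trainable parameters in the model.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")
```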
Concerns
Quality and security
Open-source AI may allow bioterrorism groups to remove safety fine-tuning and other safeguards from AI models.[14][4] A July 2024 report by the White House concluded that there was not yet sufficient evidence to justify restricting the release of model weights.[61]
Once an open-source model is public, it cannot be rolled back or updated if serious security issues are detected.[62][better source needed] However, the main barrier to developing real-world terrorist schemes lies in stringent restrictions on necessary materials and equipment.[62][better source needed] Furthermore, the rapid pace of AI advancement makes it less appealing to use older models, which are more vulnerable to attacks but also less capable.[62][better source needed]
Researchers have also criticized open-source artificial intelligence over security and ethical concerns. An analysis of over 100,000 open-source models on Hugging Face and GitHub using code vulnerability scanners like Bandit, FlawFinder, and Semgrep found that over 30% of models have high-severity vulnerabilities.[63][better source needed] Furthermore, closed models typically have fewer safety risks than open-source models.[62][better source needed] The freedom to augment open-source models has led to developers releasing models without ethical guidelines, such as GPT4-Chan.[62][better source needed]
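As an illustration of how such a scan is typically run, a minimal sketch using the Bandit command-line scanner on a locally downloaded model repository might look like the following (the repository path is a hypothetical placeholder, and this is not the cited study's methodology):

```python
import json
import subprocess

# Run Bandit recursively over a cloned model repository and collect JSON output.
# "downloaded_model_repo" is a placeholder path, not a real project.
result = subprocess.run(
    ["bandit", "-r", "downloaded_model_repo", "-f", "json"],
    capture_output=True,
    text=True,
)

# Count findings that Bandit rates as high severity.
report = json.loads(result.stdout)
high = [i for i in report["results"] if i["issue_severity"] == "HIGH"]
print(f"{len(high)} high-severity findings")
```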
Practicality
Even with truly open-source AI, training a model oneself can still be prohibitively expensive for many users, unlike other open-source projects, which require only downloading code.[4]
Partially open-source code released with many legal restrictions has deterred some companies from using those projects for fear of future lawsuits.[4]
See also
Wikimedia Commons has media related to Open source artificial intelligence.
References
External links