OpenAI has made its new open-source AI model, called GPT-OSS, freely available for download. This model, the company’s first open-source release in six years, can be run on personal computers, customized by users, and used for commercial purposes.
GPT-OSS Officially Launched
GPT-OSS comes in two versions, with 120 billion and 20 billion parameters. The 120-billion-parameter model runs on a single Nvidia GPU (with 80 GB of memory) and performs similarly to OpenAI’s existing o4-mini model. The smaller 20-billion-parameter model requires only 16 GB of memory and performs similarly to o3-mini.
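The memory figures above are easy to sanity-check with back-of-envelope arithmetic. The sketch below is not from OpenAI; it simply multiplies parameter count by bytes per weight, and the 4-bit figure is an assumption reflecting the low-precision quantization commonly used to fit large models on consumer hardware.

```python
def estimated_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight-storage footprint in gigabytes (weights only,
    ignoring activations and runtime overhead)."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# 20B parameters at ~4 bits per weight: about 10 GB of weights, which is
# consistent with a 16 GB requirement once runtime overhead is added.
print(estimated_memory_gb(20, 4))    # prints 10.0

# 120B parameters at ~4 bits: about 60 GB of weights, plausible on a
# single 80 GB data-center GPU.
print(estimated_memory_gb(120, 4))   # prints 60.0
```

At full 16-bit precision the same models would need roughly 40 GB and 240 GB respectively, which is why aggressive quantization is what makes local deployment practical.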

Both versions are available today for download on platforms such as Hugging Face, Databricks, Azure, and AWS. The models are released under the Apache 2.0 license, which allows them to be used, modified, and distributed, including in commercial projects.
For years before this launch, OpenAI had avoided open models, citing safety concerns. CEO Sam Altman said earlier this year that the company had been “on the wrong side of history” by not sharing open models.
In the meantime, developers had turned to open models for their lower costs and flexibility. Following the rise of DeepSeek in January, internal discussions led OpenAI to rethink this strategy.
The GPT-OSS models released today support functions such as web browsing, coding, reasoning, and managing software agents through APIs. Chris Cook, an OpenAI researcher, said at the press conference that the majority of the company’s customers already use open models, and that GPT-OSS addresses this need.
In terms of safety, OpenAI describes GPT-OSS as the most comprehensively tested model it has ever released. The company says it is working with external security firms to prevent the model from being misused in risky areas such as cybersecurity and biological weapons.
The model’s “chain of thought,” the intermediate reasoning it uses to produce answers, is left visible, allowing users to monitor potential abuse, deceptive behavior, and errors. The model’s output is limited to text. As with OpenAI’s other models, the dataset GPT-OSS was trained on is not shared.
OpenAI co-founder Greg Brockman commented on the model’s performance, saying, “They’re really powerful models. The team did a great job this time.” The company hasn’t released a timeline for future releases of GPT-OSS.
The target audience is reportedly smaller developers and companies that want more control. Brockman said the company believes that lowering the barrier to access drives innovation: “If you let people try, they’ll do amazing things.”
OpenAI hasn’t yet published benchmarks comparing GPT-OSS against Llama, DeepSeek, or Google’s Gemma models. However, in internal tests, GPT-OSS reportedly produced results comparable to OpenAI’s closed models on coding tasks and benchmarks such as Humanity’s Last Exam.