As part of the partnership, OpenAI will be able to immediately run its advanced AI workloads on AWS’s world-class infrastructure.
- The multi-year partnership provides OpenAI with immediate and growing access to AWS’s world-class infrastructure for advanced AI workloads.
- AWS will provide OpenAI with Amazon EC2 UltraServers featuring hundreds of thousands of chips, with the ability to scale to tens of millions of CPUs for advanced generative AI workloads.
- This agreement, representing a $38 billion commitment, will enable OpenAI to rapidly increase its computing capacity while leveraging AWS’s price, performance, scalability, and security advantages.
Amazon Web Services (AWS) and OpenAI have announced a multi-year strategic partnership that enables OpenAI to immediately run and scale its core AI workloads on AWS’s world-class infrastructure. Under the $38 billion agreement, which will grow over seven years, OpenAI gains access to AWS compute comprising hundreds of thousands of state-of-the-art NVIDIA GPUs, with the ability to expand to tens of millions of CPUs to rapidly scale agentic workloads. AWS’s leadership in cloud infrastructure, combined with OpenAI’s pioneering advances in generative AI, will help millions of users continue to get value from ChatGPT.
The rapid advancement of AI has created unprecedented demand for compute. Companies building frontier models are leveraging the performance, scale, and security of AWS to push their models to higher levels of intelligence. Under the partnership, OpenAI will begin using AWS compute immediately, with all capacity targeted to be deployed by the end of 2026 and the ability to expand further into 2027 and beyond.
The infrastructure AWS is building for OpenAI features a sophisticated architecture optimized for AI processing efficiency and performance. Clustering NVIDIA GB200 and GB300 GPUs on the same network via Amazon EC2 UltraServers delivers low-latency performance across interconnected systems, allowing OpenAI to run its workloads efficiently. The clusters are designed to support a range of workloads, from serving inference for ChatGPT to training next-generation models, with the flexibility to adapt to OpenAI’s evolving needs.
OpenAI co-founder and CEO Sam Altman said: “Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
AWS CEO Matt Garman said: “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
This development marks a significant milestone in the two companies’ joint efforts to deliver cutting-edge AI technologies to organizations worldwide. Earlier this year, OpenAI’s open weight foundation models became available on Amazon Bedrock, giving millions of AWS customers additional model options. OpenAI has become one of the most popular open model providers on Amazon Bedrock, with thousands of customers, including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health, using these models for agentic workflows, coding, scientific analysis, mathematical problem solving, and more.
To start using OpenAI’s open weight models on Amazon Bedrock, visit https://aws.amazon.com/bedrock/openai.
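As a concrete starting point, the sketch below shows one way to invoke an OpenAI open weight model through the Amazon Bedrock Runtime Converse API using boto3. The model ID and region used here are assumptions for illustration; check the Bedrock console for the identifiers actually available to your account.

```python
# Minimal sketch: calling an OpenAI open weight model on Amazon Bedrock
# via boto3's Converse API.
import boto3

# Region is an assumption -- use one where the model is offered to your account.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    # Assumed model ID for an OpenAI open weight model; verify in the
    # Bedrock console before use.
    modelId="openai.gpt-oss-120b-1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the benefits of open weight models."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.7},
)

# The Converse API returns the assistant reply under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API uses the same message and inference-config shape across Bedrock models, switching to a different open weight variant should only require changing the `modelId`.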

