OpenAI has signed a monumental $38 billion agreement with Amazon Web Services (AWS), giving the AI innovator access to Amazon’s global data-centre network and hundreds of thousands of cutting-edge Nvidia AI chips. The agreement marks a strategic shift in how OpenAI will scale its infrastructure to power next-generation models and support its expanding user base.
Under the terms of the deal, OpenAI will begin using AWS’s high-performance compute clusters almost immediately, with full deployment targeted by the end of 2026 and further capacity expansion planned for 2027 and beyond. AWS will provide the backbone infrastructure required to train and run OpenAI’s advanced language and multimodal AI systems at unprecedented scale.
This partnership comes amid a broader push by OpenAI to diversify its cloud and hardware suppliers, moving beyond its longstanding reliance on Microsoft Azure and securing the scale needed to support the emerging frontier of AI. For AWS, the arrangement strengthens its position as a key provider in the booming AI infrastructure market.
Industry analysts note the deal underscores two important trends: the relentless growth in demand for computing power driven by large-scale AI models, and the intensifying competition among major cloud providers to host those workloads. The deal also raises questions about how such massive infrastructure investments will be funded and monetised, especially for a company still scaling its commercial revenue.