## Blackwell GPUs and LLMs: A Cost Analysis

A new study published on arXiv analyzes the cost of running large language models (LLMs) locally on Nvidia's new Blackwell GPUs, comparing this approach with cloud API services while factoring in hardware amortization. Preliminary findings indicate that, under certain conditions, running LLMs locally on Blackwell GPUs can be more cost-effective than using cloud services.

These results could support a strategic shift for companies evaluating LLM deployments, away from cloud APIs and toward proprietary hardware infrastructure. Of course, the choice between cloud and local infrastructure depends on many factors, including model size, request volume, latency requirements, and in-house expertise. Even so, the analysis offers a useful starting point for assessing the economics of latest-generation GPUs such as Blackwell.
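The break-even point in such a comparison hinges on a few variables: hardware amortization, electricity, sustained throughput, and utilization. The following Python sketch illustrates the arithmetic; all prices, throughput figures, and the `local_cost_per_million_tokens` helper are illustrative assumptions, not values taken from the study.

```python
# Back-of-the-envelope break-even sketch: local Blackwell serving vs. cloud API.
# Every number below is an illustrative assumption, not data from the study.

def local_cost_per_million_tokens(
    gpu_price_usd: float,          # upfront hardware cost (assumed)
    amortization_months: int,      # straight-line amortization period (assumed)
    power_kw: float,               # average server power draw (assumed)
    electricity_usd_per_kwh: float,
    tokens_per_second: float,      # sustained generation throughput (assumed)
    utilization: float,            # fraction of time the GPU is actually busy
) -> float:
    """Amortized hardware plus electricity cost per 1M generated tokens."""
    hours_per_month = 730
    hourly_hardware = gpu_price_usd / (amortization_months * hours_per_month)
    hourly_power = power_kw * electricity_usd_per_kwh
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return (hourly_hardware + hourly_power) / tokens_per_hour * 1_000_000


if __name__ == "__main__":
    # Hypothetical inputs for a single Blackwell-class card.
    local = local_cost_per_million_tokens(
        gpu_price_usd=35_000,
        amortization_months=36,
        power_kw=1.0,
        electricity_usd_per_kwh=0.15,
        tokens_per_second=250,
        utilization=0.7,
    )
    cloud = 2.50  # assumed cloud API price per 1M output tokens
    print(f"Local: ${local:.2f} per 1M tokens")
    print(f"Cloud: ${cloud:.2f} per 1M tokens")
    print("Local is cheaper" if local < cloud else "Cloud is cheaper")
```

With these assumed inputs, utilization is the dominant lever: the same hardware that undercuts the cloud price at 70% utilization loses to it when the GPU sits idle most of the day, which is consistent with the study's "under certain conditions" framing.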