Verda Secures $117 Million for AI Infrastructure Powered by Clean Nordic Energy

Verda, a Helsinki-based company specializing in artificial intelligence infrastructure, has raised $117 million in new funding. The round combines equity led by Lifeline Ventures, with participation from byFounders, Tesi, Varma, and other investors, and debt financing from a group of Nordic financial institutions. Founded in 2020 and formerly known as DataCrunch, Verda positions itself as a key player in the AI infrastructure landscape, aiming to meet the growing demand for high-performance computing.

Verda's primary goal is to build AI cloud infrastructure that allows developers and organizations to access high-performance compute on demand and at scale, eliminating the typical complexities of traditional cloud procurement. This approach aligns with the needs of companies seeking greater control and transparency over their AI workloads, a crucial aspect for decision-makers evaluating alternatives to hyperscale cloud providers.

A Vertically Integrated and Sustainable Infrastructure Model

Verda distinguishes itself through vertical integration, managing the entire stack: from physical servers and data centers to the tools and services teams use to develop AI solutions. This strategy enables end-to-end control over the computing environment, optimizing both performance and cost. The company's data centers in Finland run on 100% renewable energy, leveraging the Nordic region's natural advantages in clean electricity and cooling efficiency. Sustainability and energy efficiency are increasingly relevant factors in the total cost of ownership (TCO) of AI infrastructure.

As one of NVIDIA's Preferred Partners globally, Verda provides access to cutting-edge GPU technologies, essential for large language model (LLM) workloads and other compute-intensive AI applications. The company already counts well-known names such as Nokia, 1X, ExpressVPN, and Freepik among its customers, demonstrating its ability to support complex requirements. Ruben Bryon, founder and CEO of Verda, emphasizes the importance of this vertical integration: "A big part of our success comes from being vertically integrated, handling everything from physical infrastructure to the application layer."

Sustained Growth and Global Expansion Plans

Verda has demonstrated a remarkable growth trajectory: it is already cash-flow positive, and its revenue run rate has doubled to exceed $60 million in the first quarter of 2026. This financial momentum underpins the company's ambitious expansion plans. The newly secured funding will be used to accelerate the development of Verda's AI cloud infrastructure and its international expansion.

Verda plans to hire over 100 people and launch in new markets this year, including the UK and the US, to meet growing global demand. Bryon stated: "We're building the next generation of AI cloud infrastructure for pioneering teams across the globe. This funding allows us to double down on development and accelerate our expansion across Europe, the US, and Asia." Verda's approach, which includes a dedicated AI Lab team working directly with customers to drive product decisions, reflects a commitment to customized, market-responsive solutions.

Implications for AI Deployment Strategies

For CTOs, DevOps leads, and infrastructure architects, players like Verda offer a credible alternative to traditional AI deployment models. The promise of high-performance, vertically managed infrastructure powered by clean energy addresses several key concerns: data control and compliance (data sovereignty), operational cost management, and environmental impact.

Organizations evaluating deployment strategies for AI workloads, whether self-hosted models or specialized cloud solutions, must weigh the trade-offs between flexibility, TCO, and data sovereignty requirements. Verda positions itself as a solution that seeks to balance these factors, offering a controlled and optimized environment for AI. For a deeper analysis of these trade-offs and frameworks for evaluating on-premise deployments, AI-RADAR offers resources and insights at /llm-onpremise.