Topic / Trend Rising

Nvidia's Dominance and Strategic Partnerships

Nvidia continues to strengthen its position in the AI market through strategic partnerships and technological advancements. The company is collaborating with various players across the industry, from chip manufacturers to automotive companies, to drive innovation and expand its reach.

Detected: 2026-03-21 · Updated: 2026-03-21

Related Coverage

2026-03-21 DigiTimes

Mercedes-Benz Taiwan adopts Nvidia's Alpamayo platform for new CLA

Mercedes-Benz Taiwan has announced the integration of Nvidia's Alpamayo platform into its new CLA. This strategic move underscores the company's commitment to technological innovation in the automotive sector and paves the way for future expansion in...

#Hardware
2026-03-20 DigiTimes

Commentary: Nvidia sees Groq as its next Mellanox

According to Digitimes, Nvidia may view Groq as a strategic acquisition target, similar to Mellanox, to strengthen its position in the AI inference market. Groq stands out for its Tensor Streaming Processor (TSP) architecture designed for low latency...

#Hardware #LLM On-Premise #DevOps
2026-03-20 DigiTimes

Nvidia Vera Rubin servers to drive liquid cooling demand

Nvidia Vera Rubin servers, designed for intensive workloads, are increasing the demand for liquid cooling systems. This trend is driven by the need to manage the high power density and heat generated by high-performance components, crucial for artificial...

#Hardware #LLM On-Premise #DevOps
2026-03-20 DigiTimes

Volkswagen signals shift away from Nvidia as Chinese chips gain ground

Volkswagen is reportedly considering reducing its reliance on Nvidia, shifting towards chip solutions developed in China. This strategic move reflects a growing confidence in Chinese technological capabilities and a potential diversification of the...

#Hardware #LLM On-Premise #DevOps
2026-03-19 DigiTimes

Intel pushes large-area AI packaging to close foundry gap

Intel is reportedly betting on advanced packaging to regain AI foundry ground. This strategic move aims to bridge the technological gap and regain market share in the rapidly growing AI sector.

#Hardware #LLM On-Premise #DevOps
2026-03-19 The Register AI

Google says it will let UK publishers opt out of AI Overviews

In response to concerns raised by the UK's competition watchdog (CMA), Google has announced that it will allow UK publishers to opt out of the inclusion of their content in AI Overviews, the summaries generated by artificial intelligence in the search...

#LLM On-Premise #DevOps
2026-03-19 Tom's Hardware

Thermalright's AIO cooler with panoramic screen for $165

Thermalright offers an all-in-one (AIO) liquid CPU cooler featuring an integrated panoramic display. The Wonder Vision 360 provides an aesthetic and functional option to customize your PC, with a 20% discount available.

2026-03-19 AI News

NVIDIA: Open-source toolkit for safer enterprise AI agents

NVIDIA has introduced an open-source toolkit to simplify the development and deployment of autonomous AI agents in the enterprise. The goal is to provide companies with the tools to control data and liability when using these agents, with a focus on ...

#Hardware #LLM On-Premise #DevOps
2026-03-19 DigiTimes

Amazon Trainium 3: conflicting rumors on release timeline

Conflicting rumors are emerging about the release of Amazon's Trainium 3 chip. While some sources suggest delays, suppliers express optimism. The new chip is designed to accelerate machine learning workloads in the AWS cloud.

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-19 DigiTimes

AMD, Samsung deepen AI chip ties with HBM4 supply and foundry talks

AMD and Samsung are deepening their collaboration in the AI chip sector. The agreement includes the supply of HBM4 memory and discussions on foundry production, crucial elements for the development of high-performance AI solutions.

#Hardware #LLM On-Premise #DevOps
2026-03-19 DigiTimes

AI and memory to drive semiconductor output past US$1 trillion in 2026

According to DIGITIMES, demand for AI and specialized memory will drive the semiconductor market to exceed $1 trillion in 2026. This surge is fueled by the need for advanced hardware to handle increasingly complex artificial intelligence workloads.

#Hardware #LLM On-Premise #DevOps
2026-03-19 DigiTimes

Micron delivers record-breaking 2Q26 financial results driven by AI demand

According to Digitimes, Micron has announced outstanding financial results for the second quarter of 2026, primarily driven by strong demand for artificial intelligence solutions. This highlights the increasing role of AI as a growth engine for memory...

#LLM On-Premise #Fine-Tuning #DevOps
2026-03-19 DigiTimes

Micron delivers record-breaking FQ2 2026 results driven by AI demand

Micron Technology anticipates stronger-than-expected financial results for the second quarter of 2026, driven by robust demand for high-performance memory solutions for artificial intelligence applications. The company is particularly benefiting from...

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-19 DigiTimes

AMD deepens ties with Naver in bid to expand AI infrastructure

AMD is deepening its partnership with Naver, a South Korean tech giant, to expand AI infrastructure. This strategic move aims to capitalize on the growing demand for advanced AI solutions in the Asian market.

#Hardware #LLM On-Premise #DevOps
2026-03-19 DigiTimes

Samsung, AMD expand AI memory and compute partnership with HBM4, DDR5

Samsung and AMD are strengthening their strategic partnership with a memorandum of understanding (MOU) focused on optimizing the supply of HBM4 memory and supporting DDR5 memory, key elements for high-performance artificial intelligence applications.

#Hardware #LLM On-Premise #DevOps
2026-03-19 TechWire Asia

Malaysia's data centre policy: an AI-first approach?

Malaysia has been quietly blocking non-AI data centre projects for nearly two years. The government aims to boost AI investments, but concerns arise about technological sovereignty and grid impact. The strategy seeks to transform Malaysia into an AI ...

2026-03-18 TechCrunch AI

Nvidia's Networking Business: A Multibillion-Dollar Force

Nvidia's networking business generated $11 billion in revenue last quarter, exceeding expectations and establishing itself as a strategic pillar alongside the core GPU and gaming businesses. This growth underscores the importance of high-performance ...

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-18 Tom's Hardware

Nvidia to focus on single 88-core Vera CPU model

Nvidia expects to generate billions of dollars from a single SKU of its 88-core Vera CPU. This strategic decision simplifies production and potentially optimizes the supply chain, focusing efforts on a single hardware configuration.

#Hardware #LLM On-Premise #DevOps
2026-03-18 DigiTimes

Nvidia partners with chipmakers to advance industrial robotics

Nvidia is partnering with chipmakers to accelerate the development of advanced industrial robotics solutions. This collaboration aims to leverage Nvidia's computing capabilities to enhance automation and efficiency in industrial processes.

#Hardware #LLM On-Premise #DevOps
2026-03-18 DigiTimes

Nvidia reportedly preparing Groq AI chips for the Chinese market

Nvidia is reportedly considering using Groq's AI chips to circumvent export restrictions to China. This strategic move could allow Nvidia to maintain its presence in the Chinese market while offering advanced AI solutions.

#Hardware #LLM On-Premise #DevOps
2026-03-18 DigiTimes

Jensen Huang's survival playbook: Nvidia navigates the next AI frontier

Nvidia CEO Jensen Huang is leading the company through the challenges of the AI sector. Nvidia's strategies focus on continuous innovation and adapting to new market needs, maintaining leadership in GPUs and accelerated computing solutions.

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-18 DigiTimes

Samsung, Nvidia, and Groq: Closing the Loop on AI Inference

Samsung collaborates with Nvidia and Groq to optimize AI inference performance. The synergy between the three giants aims to improve the efficiency and speed of inference workloads, leveraging their respective expertise...

#Hardware #LLM On-Premise #DevOps
2026-03-18 DigiTimes

Nvidia charts optical-copper path for future interconnects

According to Digitimes, Nvidia is exploring a future where interconnects between hardware components leverage both copper and fiber optics. This strategy could have significant implications for high-performance computing architectures and future data...

#Hardware #LLM On-Premise #DevOps
2026-03-18 DigiTimes

Nvidia H200: Approvals Secured in China, Production Restarted

Nvidia secures approvals and orders for the H200 GPU in China, restarting supply chain production. This development indicates strong demand for high-performance computing solutions in the Chinese market, despite regulatory restrictions.

#Hardware #LLM On-Premise #DevOps
2026-03-18 DigiTimes

Analysis: Nvidia leveraging DeepSeek R1 to cement hardware–model domination

According to DIGITIMES, Nvidia is capitalizing on the success of the DeepSeek R1 language model to strengthen its dominant position in both the hardware and AI model markets. This synergy allows Nvidia to optimize its GPUs for specific workloads, offering...

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-17 Tom's Hardware

Nvidia: H200 Shipments to China Resume with US Licenses

Jensen Huang announces the restart of H200 GPU production and shipments to Chinese customers, enabled by new licenses from the US government. A sign of easing tensions in the AI accelerator market.

#Hardware #LLM On-Premise #DevOps
2026-03-17 Tom's Hardware

Jensen Huang responds to DLSS 5 criticism at GTC 2026

At GTC 2026, Nvidia CEO Jensen Huang addressed criticism regarding DLSS 5 technology. Specific details of the response and arguments presented are not detailed in the source, but the event provided a platform to address concerns from users and the gaming...

#Hardware #LLM On-Premise #DevOps
2026-03-17 The Register AI

Nvidia aims for space with the Vera Rubin Space-1 Module

Nvidia has designed a module, called Vera Rubin Space-1, intended for data processing directly in space. Despite some industry concerns, the company envisions a future for orbital datacenters and proposes a specific hardware solution to operate outside...

#Hardware #LLM On-Premise #DevOps
2026-03-17 ServeTheHome

NVIDIA Vera Rubin: AI Inference with GPUs and Groq LPUs

NVIDIA will integrate Groq's LPUs into its Vera Rubin rackscale architecture. This move represents a significant expansion beyond the exclusive use of GPUs for AI inference, opening new possibilities for low-latency workloads. The Vera Rubin platform...

#Hardware #LLM On-Premise #DevOps
2026-03-17 Tom's Hardware

Nvidia Rubin Ultra: World's First AI GPU with 1TB of HBM4E Memory

Nvidia has unveiled Rubin Ultra, the world's first AI GPU featuring 1TB of HBM4E memory. The new chips will slot into Kyber racks, opening new frontiers for performance in large model inference and training. This innovation promises to significantly ...

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-17 DigiTimes

Nvidia GTC 2026: NemoClaw adds security layer to OpenClaw AI agents

Nvidia introduces NemoClaw, a security extension for OpenClaw AI agents. The announcement was made during GTC 2026. NemoClaw introduces an additional layer of protection, crucial for AI applications requiring high security and reliability standards.

#Hardware #LLM On-Premise #DevOps
2026-03-17 Tech in Asia

Hyundai, Kia expand Nvidia partnership for self-driving tech

Hyundai and Kia are strengthening their partnership with Nvidia to develop self-driving technologies. The agreement involves leveraging Nvidia's data platforms and AI technologies to create a unified learning pipeline.

#Hardware #LLM On-Premise #DevOps
2026-03-17 DigiTimes

Samsung unveils HBM4E and comprehensive AI solutions at GTC 2026

Samsung unveiled its HBM4E memory at GTC 2026, highlighting its comprehensive AI solutions and partnership with Nvidia. The Korean company aims to strengthen its position in the rapidly expanding market for high-bandwidth memory, crucial for generative...

#Hardware #LLM On-Premise #DevOps
2026-03-17 DigiTimes

Intel joins GTC, eyes debut of co-developed x86 CPU with Nvidia

Intel will participate in GTC, aiming to present its x86 CPU co-developed with Nvidia. This strategic move could mark a turning point in the processor market, with significant implications for artificial intelligence and high-performance computing workloads...

#Hardware #LLM On-Premise #DevOps
2026-03-16 Tom's Hardware

Micron enters high-volume production of HBM4 for Nvidia Vera Rubin

Micron has announced the start of high-volume production of HBM4, intended for the Nvidia Vera Rubin platform. The new memory offers a 2.3x improvement in bandwidth and a 20% increase in power efficiency compared to previous generations.

#Hardware
2026-03-16 TechCrunch AI

Nvidia expects $1 trillion in orders for Blackwell and Vera Rubin

Nvidia CEO Jensen Huang expects orders for the new Blackwell and Vera Rubin architectures to reach $1 trillion. This forecast underscores the strong demand for compute accelerators for artificial intelligence and high-performance computing workloads.

#Hardware #LLM On-Premise #DevOps
2026-03-16 The Register AI

Nvidia's DLSS 5: AI-powered realism for game characters

Nvidia's latest DLSS generation aims to enhance the realism of video game characters, mitigating the 'uncanny valley' effect and elevating graphics to a new level of detail and naturalness.

#Hardware #LLM On-Premise #DevOps
2026-03-16 Tom's Hardware

Jensen Huang expects Nvidia to sell $1 trillion of AI hardware through 2027

Nvidia CEO Jensen Huang estimates the company will sell $1 trillion worth of AI hardware by 2027. This forecast reflects the increasing demand for computing power to support the development and deployment of increasingly complex AI models, particularly...

#Hardware #LLM On-Premise #DevOps
2026-03-16 Tom's Hardware

Nvidia's Nemotron Coalition: Building Open Frontier Models

Nvidia announced the Nemotron coalition, bringing together eight AI labs to develop open-source frontier models. The initiative aims to foster innovation and collaboration in the field of AI, with a focus on advanced and accessible models.

#Hardware #LLM On-Premise #DevOps
2026-03-16 The Register AI

Nvidia integrates Groq tech into LPX racks for accelerated AI inference

Nvidia will leverage Groq's Language Processing Units (LPUs), acquired for $20 billion, to enhance the inference performance of its Vera Rubin rack systems. The goal is to accelerate response times for artificial intelligence applications.

#Hardware #LLM On-Premise #DevOps
2026-03-16 Tom's Hardware

Nvidia GTC 2026: Sneak Peek at Next-Gen GPUs

A preview of the future of computational acceleration: what to expect from the Nvidia GTC 2026 conference in terms of new GPU architectures and advances in artificial intelligence. The event remains a benchmark for industry professionals.

#Hardware #LLM On-Premise #Fine-Tuning
2026-03-16 AI News

NTT DATA and NVIDIA: Enterprise AI Factories at Production Scale

NTT DATA launches NVIDIA-powered platforms to scale AI, integrating GPUs, high-performance networking, and NVIDIA AI Enterprise software (NeMo and NIM Microservices). The goal is to standardize output and reduce the time and cost of moving from proof-of-concept to...

#Hardware #LLM On-Premise #DevOps
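NIM inference microservices expose an OpenAI-compatible HTTP API, so platforms like the one described typically drive them with standard chat-completions requests. A minimal sketch of building such a request; the endpoint URL and model name below are illustrative assumptions, not details from the article:

```python
import json

# ASSUMPTIONS: a locally hosted NIM at this URL and this model name are
# placeholders for illustration; swap in your actual deployment values.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "meta/llama-3.1-8b-instruct",  # illustrative model identifier
    "Summarize this quarter's incident reports.",
)
print(json.dumps(payload, indent=2))
# An HTTP client would POST this JSON to NIM_URL, typically with an
# Authorization header, and read back an OpenAI-style response object.
```

Because the payload shape matches the OpenAI API, existing client libraries and tooling can target a NIM deployment by changing only the base URL.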