NVIDIA Closes Mellanox Acquisition, Adds High-Speed Networking to Tech Portfolio
by Ryan Smith on April 27, 2020 11:00 AM EST - Posted in
- GPUs
- Networking
- IT Computing
- NVIDIA
- Mellanox
- InfiniBand
Just over a year ago, NVIDIA announced its intention to acquire Mellanox, a leading datacenter networking and interconnect provider. And, after working through some prolonged regulatory hurdles, including approval by the Chinese government as well as a waiting period in the United States, NVIDIA has now closed the deal as of this morning. All told, NVIDIA is pricing the final acquisition at a cool $7 billion, all in cash.
Overall, in the intervening year, NVIDIA’s reasoning for acquiring the networking provider has not changed: the company believes that a more vertically integrated product stack that includes high-speed networking hardware will allow them to further grow their business, especially as GPU-powered supercomputers and other HPC clusters become more prominent. To that end, it’s hard to get more prominent than Mellanox, whose Ethernet and InfiniBand gear is used in over half of the TOP500-listed supercomputers in the world, as well as countless datacenters.
Ultimately, acquiring the company not only gives NVIDIA leading-edge networking products and IP, but it also allows them to develop in-house the high-performance interconnects needed for their own high-performance compute products to better scale. NVIDIA already has significant dealings with Mellanox, as the company’s DGX-2 systems incorporate Mellanox’s controllers for multi-node scaling. As well, Mellanox’s hardware is used in both the Summit and Sierra supercomputers, both of which are also powered by NVIDIA GPUs. So this acquisition is in many respects just the latest expansion in NVIDIA’s ongoing efforts to grow their datacenter presence.
Source: NVIDIA
20 Comments
imaheadcase - Monday, April 27, 2020 - link
Every time i hear something like "interconnect provider" i just think "So they run cables". hehe
mentor07825 - Monday, April 27, 2020 - link
Well... yes. But then again, the internet is just a series of tubes.
surt - Monday, April 27, 2020 - link
It always bugged me that that statement got so much criticism, given that it was entirely true.
RadiclDreamer - Monday, April 27, 2020 - link
Sort of... I think it was more the words that came after that made him a laughing stock. Watch the entire speech before making a final judgement on it.
rahvin - Monday, April 27, 2020 - link
It was appropriately ridiculed. I have no issue with using that as a greatly simplified example to explain how the internet works to him given his age, but for him to puppet it on the floor of the senate and in legislation was the point where he stepped into the path of ridicule, rightly deserved ridicule.
LiKenun - Monday, April 27, 2020 - link
They lay pipe.
Crazyeyeskillah - Tuesday, April 28, 2020 - link
This guy Networks.
brucethemoose - Monday, April 27, 2020 - link
So... Teslas with built-in infiniband ports?
edzieba - Monday, April 27, 2020 - link
Sounds like the idea. If you can just dump your GPUs straight into your network, then why bother buying a bunch of those pesky CPUs just to act as GPU hosts?
brucethemoose - Monday, April 27, 2020 - link
Maybe? I was picturing a port stuck on a regular PCIe accelerator, to conserve PCIe slots and cut inter-node GPU-to-GPU latency. Most projects need some kind of traditional CPU/OS as a host. IDK about HPC workloads, but those folk seem to like big CPUs too.
I can picture Nvidia eventually selling embedded (or socketed?!) Teslas as full blown system on packages some day, complete with DDRX slots and other I/O. That would be one heck of an undertaking though.