Samsung Develops 12-Layer 3D TSV DRAM: Up to 24 GB HBM2
by Anton Shilov on October 7, 2019 3:00 PM EST

Samsung on Monday said that it had developed the industry's first 12-layer 3D packaging for DRAM products. The technology uses through-silicon vias (TSVs) to build high-capacity HBM memory devices for applications that benefit from high memory bandwidth and capacity, such as higher-end graphics cards, FPGAs, and compute cards.
Samsung’s 12-layer DRAM KGSDs (known good stack dies) will feature 60,000 TSV holes, which is why the manufacturer considers the technology one of the most challenging packaging methods to bring to mass production. Despite the increase in the number of layers from eight to 12, the thickness of the package remains at 720 microns, so Samsung’s partners will not have to change anything on their side to adopt the new stacks. It does mean that the individual DRAM dies are getting thinner, while maintaining acceptable yields for high-end products.
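The implication for die thickness can be sketched with back-of-envelope arithmetic. Only the 720-micron package height and the layer counts come from the article; the per-layer figure is simply an upper bound, ignoring the base logic die, bumps, and underfill.

```python
# Rough per-layer height budget implied by a fixed 720-micron package.
# This is an illustrative upper bound, not a measured die thickness:
# the real stack also contains a base die, microbumps, and underfill.

PACKAGE_HEIGHT_UM = 720  # package thickness stays constant per Samsung

for layers in (8, 12):
    per_layer_um = PACKAGE_HEIGHT_UM / layers
    print(f"{layers} layers -> at most {per_layer_um:.0f} um per DRAM layer")
```

Going from 8 to 12 layers in the same envelope shrinks the per-layer budget from at most 90 microns to at most 60 microns, which is why Samsung highlights thinning as the key manufacturing challenge.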
One of the first products to use Samsung’s 12-layer DRAM packaging technology will be the company’s 24 GB HBM2 KGSDs, which will enter mass production shortly. These devices will allow developers of CPUs, GPUs, and FPGAs to install 48 GB or 96 GB of memory using 2048-bit or 4096-bit buses, respectively. The technology also enables 6 GB and 12 GB stacks in less dense configurations.
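The capacity figures follow directly from the stack arithmetic. A short sketch, assuming the standard 1024-bit interface per HBM2 stack defined by JEDEC (the 1024-bit figure is not stated in the article itself):

```python
# Capacity math for Samsung's 24 GB 12-layer HBM2 stacks.
# Assumes the standard JEDEC HBM2 interface width of 1024 bits per stack.

STACK_INTERFACE_BITS = 1024  # bus width exposed by one HBM2 stack
STACK_CAPACITY_GB = 24       # Samsung's new 12-layer KGSD

def total_capacity_gb(bus_width_bits: int) -> int:
    """Total memory for a processor with the given total HBM bus width."""
    stacks = bus_width_bits // STACK_INTERFACE_BITS
    return stacks * STACK_CAPACITY_GB

print(total_capacity_gb(2048))  # 2 stacks -> 48
print(total_capacity_gb(4096))  # 4 stacks -> 96
```

A 2048-bit bus implies two stacks (48 GB) and a 4096-bit bus implies four stacks (96 GB), matching the figures Samsung quotes.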
Samsung did not disclose how much its 12-layer 24 GB HBM2 devices will cost, but since they will be available exclusively from Samsung, we expect the manufacturer to charge a premium.
Here is what Hong-Joo Baek, executive vice president of TSP (Test & System Package) at Samsung Electronics, had to say:
“Packaging technology that secures all of the intricacies of ultra-performance memory is becoming tremendously important, with the wide variety of new-age applications, such as artificial intelligence (AI) and high-performance computing (HPC). As Moore's law scaling reaches its limit, the role of 3D-TSV technology is expected to become even more critical. We want to be at the forefront of this state-of-the-art chip packaging technology.”
Related Reading
- Samsung HBM2E ‘Flashbolt’ Memory for GPUs: 16 GB Per Stack, 3.2 Gbps
- JEDEC Updates HBM Spec to Boost Capacity & Performance: 24 GB, 307 GB/s Per Stack
- Samsung Starts Production of HBM2 “Aquabolt” Memory: 8 GB, 2.4 Gbps
Source: Samsung
Comments
austinsguitar - Monday, October 7, 2019 - link
something something, make pcie 4 consumer ssd's already, something something idk why im even commenting.

osteopathic1 - Monday, October 7, 2019 - link
I don't know why you are commenting either.

1_rick - Monday, October 7, 2019 - link
Especially when PCIe 4 consumer SSDs are out. I saw an Inland 1TB PCIe 4 x4 for $170 in stock at Micro Center. Amazon has it for $10 more. There are at least two others you can find on shelves.

austinsguitar - Tuesday, October 8, 2019 - link
ayyy you missed the point. samsung pcie 4 ssd's. samsung is really the only people i trust my storage with. and i think others will agree with that statement.

AshlayW - Tuesday, October 8, 2019 - link
No, not really. That sounds like misguided brand loyalty to me.

austinsguitar - Tuesday, October 8, 2019 - link
The current iteration of the pcie 4 phison controller is awful man. Those ssd's are hotter and less stable than the kim kardashian family. This is a fact. Samsung is the leader in nvme ssd's. It's not just some funny meme i'm making here. The later they get into this, the more I worry they may not.

FullmetalTitan - Wednesday, October 9, 2019 - link
The successor to the Phoenix memory controller is in production, but I think it is being relegated to data center drives for now. I expect to see a consumer SSD with the new controller by CES 2020, but hopefully it will be announced sooner.

Kevin G - Monday, October 7, 2019 - link
I'm still waiting for next generation GPUs to leverage the 16 GB HBM2 stacks. It has been about a year since we last saw a new GPU utilizing HBM be launched (Vega 20 for data centers). We should be approaching the launch window for a new nVidia card to replace Volta, but things are similarly quiet on this front. I would suspect that it would leverage the 16 GB stacks currently in production and that these 24 GB stacks would make a nice refresh a year later.

extide - Monday, October 7, 2019 - link
Yeah you could do 64GB or 96GB with 4 stacks. That's pretty nuts. I bet nvidia will come out with a big 7nm datacenter GPU soon though -- the process is definitely ripe enough now.

deil - Tuesday, October 8, 2019 - link
yup, I think next Quadro will do exactly that.