I think the article understates how big of a deal this is. It mentions a lot about the implications from the perspective of packaging, but this opens up a whole new world for IO!
Right now, most semiconductor companies making an SoC, for whatever purpose, will license designs for common blocks like a DDR controller or a PCIe interface. But in literally every architecture meeting I've ever sat in, the senior manager bigwig always asks: "has this design been tested in silicon yet?" And for a very good reason! No one wants to make a chip only to find out that there's show-stopping errata in the new bleeding-edge Ethernet controller design. So the architecture team inevitably settles on the current-generation IO protocol, which becomes previous-generation by the time the chip is product-ready.
But if your IO controller is instead on a separate chiplet, that risk of show-stopping errata becomes less impactful (at the increased cost of packaging). The vendor can respin, or you can choose a different vendor altogether. It also allows for greater flexibility in vendor selection and SKU offerings, as well as yield improvements. But I think those upsides are a bit more obvious.
Unfortunately, it could make the business of selling IP blocks like memory and IO controllers that much more cutthroat, since the cost of changing vendors dramatically decreases. But then again, maybe not, since a respin would barely (directly) cost the customer any money.
Is there any point in standardizing on an industry-wide interconnect when it seems unlikely that there will ever be a multi-chiplet module which includes components from multiple vendors?
It seems like interconnects are something you'd want to make as fast as possible in order to stay competitive. Standardizing on a technology not controlled by your company is likely to slow down your innovation. At best, this seems like a helping hand to smaller companies like Marvell which might not have the resources to design a competitive interconnect solution themselves.
There are companies today building various types of "multi-chip-modules" (MCM) and "systems-in-package" (SiP) that mix chips and passives from different vendors.
- Octavo Systems puts TI SoCs and PMICs in a package with DRAM and ROM chips from outside vendors.
- NXP builds some automotive parts with a mix of NXP chips on different process nodes and third-party flash.
I spent a few years designing these types of systems. In my mind, the biggest value of a standard like this is to push companies to make it worth their while to sell "known-good-die" (KGD), i.e. fully tested die.
Modern chip manufacturing basically has a two-part test flow: a partial test of each die on the wafer, then, after dicing and packaging, each part goes through final test. Packaging is expensive. Imagine building a multi-chip module with SoC, DRAM, PMIC, and flash bare dies: if each of those dies has a 90% final-test yield, then the MCM might only have a 65% yield. That's a lot of money down the drain.....
If I built the same part using fully tested die, then my yields should be closer to 95-99%. Basically, the failures are no longer due to bad silicon, but bad packaging....
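The compounding effect described above can be sketched numerically. A toy calculation; the 90% per-die yield and the four-die mix are the hypothetical figures from the example, and the ~2% assembly loss for the KGD case is an assumption for illustration, not real data:

```python
# Toy yield model: an MCM fails final test if any bare die in it is bad,
# so the module yield is the product of the individual die yields.
die_yields = {"SOC": 0.90, "DRAM": 0.90, "PMIC": 0.90, "FLASH": 0.90}

mcm_yield = 1.0
for part, y in die_yields.items():
    mcm_yield *= y

print(f"MCM yield with partially tested die: {mcm_yield:.1%}")  # 65.6%

# With known-good-die (KGD), bad silicon is screened out before packaging,
# so losses come mainly from assembly defects (assumed ~2% here).
assembly_yield = 0.98
print(f"MCM yield with KGD: {assembly_yield:.1%}")
```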
Right now it's tough to get companies to spend the resources on a KGD test program because the market is just too small/scattered. A standard helps to build a market.
With that said - I haven't seen this standard yet. I just requested a copy. From the little info available on the website (https://www.uciexpress.org/) it sounds like it's just an IO interconnect standard - along the lines of something like MIPI DSI + D-PHY. That's a good start, but the dream of "lego"-like chips will require standardized footprints and pinouts, which may just never be feasible, though there are people exploring the idea....
Will chiplet vendors be selling actual physical chiplets, or still selling IP, but IP that has been fully validated through to manufacturing?
If it's physical chiplets, are there any integration problems that come from trying to combine chiplets from different fabs in a single package? Variations in chemistry? How do you safely and cleanly transport unencapsulated chiplets?
For the simplicity of this discussion let's just focus on companies selling physical chiplets.
Assuming the company follows normal manufacturing processes (a pretty safe assumption if the company is "fabless"), there is no risk in mixing silicon from different vendors/fabs in a single package. This is being done today. Designers do currently have to account for mixing parts that were designed for different package technologies: if one chip was designed for "flip-chip" and another for "wire-bond", then putting them both in the same package might require some different packaging techniques...
That said - I'd assume if a company was going to build a chip and specifically market it for this new UCIe standard then they'd probably target a WLCSP type final process which is fairly common today.
Your premise that it is unlikely that there will be multi-vendor chiplets on a single chip is undermined by the industry coming together and creating a standard to use each other’s chiplets in a single chip.
Marvell which might not have the resources to design a competitive interconnect solution themselves.
I guess it depends on what you mean by interconnect. If you’re talking about >100GbE, then Marvell has a large share of the interconnect market for high-speed Ethernet.
Marvell has nothing dense enough to meet these requirements. We're looking at 100GT/s, sure, but across millimeters, with physical connections measured in micrometers. It's a completely different technology to what Marvell makes.
128 GT/s in PCIe Gen 4 is only about 125 Gb/s. Marvell has serial interfaces that do this today for Ethernet. Since PCIe is a multi-lane technology, the individual lanes are quite low speed by comparison. Many folks who need these interfaces don’t have the expertise to produce them in-house and actually purchase the IP from a third party.
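The per-lane versus aggregate distinction can be worked out directly. A sketch: the 16 GT/s per-lane rate and 128b/130b encoding are standard PCIe Gen 4 parameters, while the x8 lane count is just an illustrative choice to reach the aggregate figure discussed here:

```python
# PCIe effective bandwidth: raw per-lane rate times line-coding efficiency,
# scaled by lane count. Gen 4 signals at 16 GT/s per lane with 128b/130b.
raw_rate_gt = 16.0       # GT/s per lane (PCIe Gen 4)
encoding = 128 / 130     # 128b/130b line coding overhead
lanes = 8                # illustrative x8 link (8 * 16 = 128 GT/s raw)

per_lane_gbps = raw_rate_gt * encoding
aggregate_gbps = per_lane_gbps * lanes
print(f"per lane:  {per_lane_gbps:.2f} Gb/s")        # ~15.75 Gb/s
print(f"x{lanes} total:  {aggregate_gbps:.1f} Gb/s")  # ~126 Gb/s
```

So each lane runs at a rate that high-speed Ethernet SerDes comfortably exceed, which is the point being made above.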
Regarding pitch, I’m not sure what your comment is based on, but it’s doubtful you’re correct, since pitch tends to be limited by crosstalk considerations, not by fabrication. Since we are all using the same fabs, we can all achieve the same minimum pitch supported by the ecosystem. Having said that, even the pitch at these speeds is measured in microns, and has been for a long time.
I think what you probably mean is that Marvell’s devices use a lot of DSP and FEC for Ethernet. This is because those interfaces run at much higher per-lane speeds and include impairment compensation for both electrical and optical channels. The DSP and FEC add latency, which in compute applications would be problematic.
While not mentioned in the article, and also obviously not using the new standard, Apple has been putting multiple dies on a single package too lately haven’t they? Isn’t that how RAM is done on the various M-series chips?
This part isn't new, or why this is significant. For example, AMD has been doing this since Zen 2 (Ryzen 3xxx), combining 14nm and 7nm components.
On the other hand, the top chip designers and foundries working together and standardizing is potentially very significant, though I reserve judgement until actual products are produced on this new standard.
They are not. It is the same as how you normally package DRAM, just inside the same package as the SoC. It is called System in Package (SiP), the same approach Apple has used for its Apple Sx chips in the Apple Watch for years.
Does NVIDIA have designs that they integrate on someone else's SoC (or vice versa)? If not that's probably why - they may not have a need for an interconnect specification like this.
I know you meant SiP or ASIC based on the other comment. But for wafers there are some developments too: wafer-scale integration has been tried, though there are various challenges (e.g. power delivery and economics) to easy adoption: https://www.eetimes.com/startup-spins-whole-wafer-for-ai/