64 Comments

  • Adonisds - Wednesday, December 11, 2019 - link

    What do they mean when they say yield is 80%? Based on a die of what size? Wouldn't it be better to say the number of defects per mm squared?
  • Adonisds - Wednesday, December 11, 2019 - link

    As I continued reading I saw that the article extrapolates the die size and defect rate. But the point of my question is why do foundries usually just say a yield number without giving those other details?
  • Ian Cutress - Wednesday, December 11, 2019 - link

    Headlines. That's why I did the math in the article as you read. Sometimes I preempt our readers' questions ;)
  • 0siris - Thursday, December 12, 2019 - link

    And this is exactly why I scrolled down to the comments section to write this comment. Thank you for showing us the relevant information that would otherwise have been buried under many layers of marketing statistics. This is why I still come to Anandtech.
  • Arnulf - Thursday, December 12, 2019 - link

    Very nice read, I wholeheartedly agree!
  • Kishoreshack - Thursday, December 12, 2019 - link

    Thanks for that, it made me understand the article even better.
    An article on how these foundries work, the manufacturing process, and other aspects would be appreciated.
    Readers would get a rough insight into the difficulties of die manufacturing.
  • name99 - Thursday, December 12, 2019 - link

    If we're doing calculations, also of interest is the extent to which design efforts to boost yield work. You mention, for example, that this chip does not utilize self-repair circuitry, whereas presumably commercial chips would, along with a variety of other mechanisms to deal with yield, from the most crude (design the chip with 26 cores, sell something with 24 cores; or design it with 34 banks of L3 and ship it with the best 32 of those 34 enabled) to redundancy on ever smaller scales.

    So the question of interest is what effect this has on real-world shipping. Sure, maybe the naive 100 mm^2 mobile chip has yields of 32%, but conceivably a more appropriately designed one could have yields up at 60% or more?
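
    For anyone who wants to play with the numbers, here is a minimal sketch of the standard Poisson yield model that calculations like these are based on (the 1.271 defects/cm2 figure is from the article; the die sizes are illustrative assumptions):

    ```python
    import math

    def poisson_yield(d0_per_cm2: float, die_area_mm2: float) -> float:
        """Fraction of dice with zero defects under a Poisson defect model."""
        return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

    def defect_density(yield_fraction: float, die_area_mm2: float) -> float:
        """Invert the model: back out D0 from an observed yield."""
        return -math.log(yield_fraction) / (die_area_mm2 / 100.0)

    # TSMC's quoted 5nm D0 applied to a ~100 mm2 mobile die
    print(poisson_yield(1.271, 100.0))  # ~0.28, in the ballpark of the ~32% above

    # Sanity check on the N7 numbers discussed below:
    # ~93.5% yield on a ~74 mm2 Zen 2 CCD implies D0 of ~0.09/cm2
    print(defect_density(0.935, 74.0))  # ~0.09
    ```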
  • ZolaIII - Thursday, December 12, 2019 - link

    Because it's commercial bragging, nothing more. The yields are based on the simplest structure, and a small one at that. If TSMC had done this with SRAM it would be both relevant and large. For everything else it will be mild at best: actually mild for GPUs and quite good for FPGAs.
  • Siress - Friday, December 13, 2019 - link

    Yield is a metric used in manufacturing that conveys meaningful information about the business aspects of the technology. It's not useful for pure technical discussion, but it's critical to the business: overhead costs, sustainability, and so on.
  • extide - Wednesday, December 11, 2019 - link

    They are saying 1.271 per sq cm. Compare to the 7nm process at 0.09 per sq cm.
  • AnGe85 - Wednesday, December 11, 2019 - link

    Nope, 0.09 is far too low. According to news from the end of October, N7 yields have dropped below 70%. For a Zen 2 chiplet/CCD this would result in something around 0.50 #/cm2.
    (A value of 0.09 #/cm2 would mean an incredible yield of about 93.5% for a complex logic chip.)
  • AnGe85 - Wednesday, December 11, 2019 - link

    To be precise, the news cited N7+ (four EUV layers) as being below 70%, whereas the CCD uses TSMC's N7 (w/o EUV). But you haven't been precise either with "7nm process" ;-)
  • extide - Wednesday, December 11, 2019 - link

    I'm talking N7
  • extide - Wednesday, December 11, 2019 - link

    Nope, 0.09 is straight from TSMC:

    https://twitter.com/realmemes6/status/120387992394...

    https://fuse.wikichip.org/news/2879/tsmc-5-nanomet...
  • Fataliity - Wednesday, December 11, 2019 - link

    That is about the exact yield of fully working Zen2 chiplets. So yes that is correct.
  • name99 - Thursday, December 12, 2019 - link

    You are comparing chip yield WITH corrective design (various redundancy) to RAW yield (no redundancy).

    On the one hand, these are not really comparable.

    On the other hand (as I said above) redundant design is obviously the way people do things. The numbers above suggest that it works astonishingly well, AND presumably it will continue to work as well for 5nm.
  • beginner99 - Thursday, December 12, 2019 - link

    So it's pretty bad right?
  • Urufu - Wednesday, December 11, 2019 - link

    Thank you Ian, a very informative preliminary look at the next phase for TSMC. I appreciate you getting this out there.
  • MrSpadge - Thursday, December 12, 2019 - link

    +1
  • twotwotwo - Wednesday, December 11, 2019 - link

    I don't remember what sent me down the rabbit hole, but yesterday I looked at the EUV article on Wikipedia. It's practically a list of problem after problem with implementing it. Fewer, higher-energy photons means far fewer hit a given square nanometer, so statistical noise is a big deal (rough numbers in the sketch below). You need high-power sources because most of the light is lost in the optics. New optical anomalies appear with the much shorter wavelength and the different optical arrangements needed to make EUV work. Dozens more things are mentioned in the article (there are >300 cites), and I'm sure there are many more problems that aren't discussed publicly!

    If recent years have taught us anything it's that predictions don't mean much until you have chips in hand, but it's kind of wild that EUV has even gotten this far.
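
    To put rough numbers on the shot-noise point: photon energy scales as 1/wavelength, so at the same exposure dose EUV delivers roughly 14x fewer photons than 193nm ArF light. A quick sketch (the 30 mJ/cm2 dose is an assumed, but typical, order of magnitude):

    ```python
    # Photon energy: E = h*c / wavelength; h*c ~ 1239.84 eV*nm
    H_C_EV_NM = 1239.84
    EV_TO_J = 1.602e-19
    DOSE_J_PER_CM2 = 0.030      # assumed 30 mJ/cm2 exposure dose
    NM2_PER_CM2 = 1e14

    for name, wavelength_nm in (("ArF 193nm", 193.0), ("EUV 13.5nm", 13.5)):
        e_photon_j = (H_C_EV_NM / wavelength_nm) * EV_TO_J
        photons_per_nm2 = DOSE_J_PER_CM2 / e_photon_j / NM2_PER_CM2
        n = photons_per_nm2 * 100                # photons on a 10nm x 10nm patch
        noise_pct = 100.0 / n ** 0.5             # relative shot noise, sqrt(N)/N
        print(f"{name}: {photons_per_nm2:5.0f} photons/nm2, "
              f"~{noise_pct:.1f}% shot noise on a (10 nm)^2 patch")
    # ArF: ~292 photons/nm2 (~0.6% noise); EUV: ~20 photons/nm2 (~2.2% noise)
    ```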
  • mpbello - Wednesday, December 11, 2019 - link

    Well, EUV has been in development for decades, and some experts in the field even went as far as saying they did not believe EUV would ever work.
  • nathanddrews - Wednesday, December 11, 2019 - link

    It's also likely that the people who know the most about the EUV process are too busy engineering solutions to update Wikipedia.
  • none12345 - Thursday, December 12, 2019 - link

    If you haven't read about the light source they're using for EUV yet, take a look; it's pretty wild. The short version: droplets of molten tin excited by multiple laser pulses.
  • anonomouse - Wednesday, December 11, 2019 - link

    The area of an SRAM array is greater than the number of bits * bitcell size. Depending on how large the array is, the total amount of control/IO area (decoders, sense amps, etc.) is often almost equal to the total bitcell area, so the above estimates of die size for the test chip are probably too low.
  • bartoni - Thursday, December 12, 2019 - link

    Ian, anonomouse is correct. The area efficiency of the highest-density SRAM array is usually only about 75%. Please adjust your 5nm D0 estimates.
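
    A back-of-the-envelope sketch of what that efficiency factor does to an area estimate. The 0.021 um2 bitcell is TSMC's published N5 high-density cell; the 256 Mb capacity is purely illustrative:

    ```python
    MBIT = 256                # illustrative array capacity (assumption)
    BITCELL_UM2 = 0.021       # TSMC's published N5 high-density SRAM bitcell
    ARRAY_EFFICIENCY = 0.75   # the 75% rule of thumb above

    raw_bitcell_mm2 = MBIT * 2**20 * BITCELL_UM2 / 1e6
    with_periphery_mm2 = raw_bitcell_mm2 / ARRAY_EFFICIENCY

    print(f"bitcells alone: {raw_bitcell_mm2:.2f} mm2")     # ~5.6 mm2
    print(f"with periphery: {with_periphery_mm2:.2f} mm2")  # ~7.5 mm2
    # A ~33% larger array area shifts the die-size (and hence D0) estimates.
    ```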
  • PixyMisa - Wednesday, December 11, 2019 - link

    30% lower power is a significant improvement - I think TSMC originally only predicted something in the range of 10% to 20%.
  • Hulk - Wednesday, December 11, 2019 - link

    How come TSMC was able to blow through 14nm, 10nm, 7nm, and is now getting good yields at 5nm while the (old) process industry leader Intel has basically been stuck at 14nm for the past 5 years?
  • Kabm - Wednesday, December 11, 2019 - link

    That is why TSMC's market value is bigger than Intel's now!
  • twotwotwo - Wednesday, December 11, 2019 - link

    Man, I would *really* love to know more about this. (A lot of it probably never sees the light of day, though.) For problems of this duration, it seems like responsibility has to go all the way up to the upper execs. Six months or even a year would be one thing, but this is the kind of timeframe where you have to think there were chances to change approach that weren't taken.
  • Yojimbo - Wednesday, December 11, 2019 - link

    Intel bit off more than they could chew with their 10 nm design. Intel's 10 nm is best compared with TSMC's 7 nm as far as characteristics are concerned. TSMC has also had trouble with their 7 nm; it would probably be unusable for Intel. TSMC has relatively low volume on their 7 nm, I believe, as far as high-powered chips are concerned, whereas Intel needs massive volume for large, relatively high-powered chips. Intel's 10 nm, which looks like it's limping along, is probably in better shape than TSMC's 7 nm; they just have a different production demand in terms of product type and volume. TSMC makes mostly mobile SoCs on their 7 nm node. The volume of AMD's chips is low compared to Intel's needs, and looking at AMD's margins they are having some troubles with the node. TSMC also introduced their 7 nm later than Intel's 10 nm, so if you compare the two companies' experiences only over the period after TSMC's 7 nm came out, the comparison looks a bit better as well.
  • Fataliity - Wednesday, December 11, 2019 - link

    Man, you tried really hard there to skew things in Intel's favor. Why are you defending their failure? They chose their goals and so did TSMC; TSMC succeeded, Intel didn't. I forget the exact figure, but I think TSMC's 7nm revenue was somewhere around 34% of total revenue, which isn't low volume. It's their highest-revenue node, beating even 28nm. It's nothing to do with volume.

    Also, TSMC has a much closer relationship with ASML. They first created HVM EUV with ASML engineers in THEIR lab, in 2013 I think. So of course they have more experience with ASML's EUV and DUV machines; they are basically co-developing the technology. They also own a majority of the patents on these techniques.

    The main reason is die size. AMD obviously knew when switching to 7nm that big chips would not be sustainable, and opted for an MCM approach due to yields and the complexities of quad patterning. That happened to be a very smart business move: below 7nm the yields get worse, and the small dies that benefit most from the node are the CPU cores, so the chiplet architecture was in hindsight a godsend (see the sketch below). The IO die, which is also quite prone to failure from defects, was kept on a larger node with almost perfect yields.

    You don't get to just change the timetables to make Intel look better. They were too ambitious, and they failed, no matter how you spin it. It was their fault. I'm sure their masks play a key role in the failure too; masks are very, very important in DUV quad patterning.
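
    A quick illustration of the die-size argument, reusing the Poisson yield model sketched earlier with an assumed N7-class defect density of 0.5/cm2:

    ```python
    import math

    D0 = 0.5  # defects/cm2, roughly the N7 figure discussed above (assumption)

    def y(die_area_mm2: float) -> float:
        """Poisson yield for a die of the given area."""
        return math.exp(-D0 * die_area_mm2 / 100.0)

    print(f"monolithic 300 mm2 die: {y(300):.0%} yield")  # ~22%
    print(f"single 75 mm2 chiplet:  {y(75):.0%} yield")   # ~69%
    # Note that 0.69**4 ~ 0.22, the same as the monolithic yield for the same
    # total silicon. The difference is that with chiplets each good 75 mm2 die
    # is sellable on its own, so ~69% of the silicon is usable instead of ~22%.
    ```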
  • Fataliity - Wednesday, December 11, 2019 - link

    And to add to that, TSMC's revenue comes strictly from making chips for other people. Intel enjoys something like 70% gross margin, while their revenues have been very close recently, if I remember correctly. So obviously TSMC is producing many more chips (maybe not all on the leading node) than Intel.
  • lightningz71 - Thursday, December 12, 2019 - link

    To add additional information, Intel also decided that, with their 10nm node, they wanted to make a material change in their gate and substrate formulation, and that proved to be harder to work with than they predicted. Basically, Intel made a bet that EUV would be more delayed than it was, and tried a different approach that would buy them two nodes of shrinkage without requiring EUV-based patterning and masking. That bet failed spectacularly. EUV was ready for volume production earlier than Intel expected (though much later than early industry targets), AND they couldn't get 10nm to behave with their material choices to the level that volume production requires.

    Compounding problems for Intel further, they HEAVILY optimize each core/chip layout for the targeted process node. This means that if there is a problem with a node, EVERYTHING gets delayed, and porting the design to an older node requires a LOT more work and takes MUCH longer than it otherwise would. It also balloons the die size on those older nodes. Newer designs usually include a LOT more transistors than older ones, and on older nodes those transistors are easily half again larger, so they either have to use much larger dies, which reduces wafer yield, or far fewer cores, resulting in lower-tier products that command a lower price on the market, reducing revenue per die. Neither is a good choice and both result in reduced revenues.
  • thestryker - Wednesday, December 11, 2019 - link

    One thing to note about ASML is that Intel wrote off the whole first generation or two of EUV because it wasn't powerful enough for their needs. That could very well bite them in the ass long term, but they clearly have/had a plan which led to that choice.

    As far as Intel is concerned it was 100% being overly ambitious and not predicting new architecture + node problems. I do agree this is on them, but with luck it has been a lesson they learn from.

    AMD was forced to try something dramatically new because of revenue, which is the real reason we saw MCM designs. It has not been without its problems either, but it seems like the best approach to huge core counts (which I consider to be 16c+).
  • Teckk - Wednesday, December 11, 2019 - link

    Has Intel published anything like this for their 10++ or 7nm?
  • ksec - Thursday, December 12, 2019 - link

    No. Intel does not fab for others (at least not any more) and has no need or interest in disclosing any of this.

    Compare that to TSMC, which is a pure-play foundry.
  • Yojimbo - Wednesday, December 11, 2019 - link

    So their N7 to N5 transition is like a half-node shrink as far as power and performance characteristics are concerned.
  • Teckk - Thursday, December 12, 2019 - link

    N5 is a full node. Also, it promises a 30% reduction in power among other things, which is pretty significant. We have yet to see numbers from Intel for their next node after 14nm (or their current 10nm HVM node).
  • Yojimbo - Saturday, December 14, 2019 - link

    I said "like a half node in terms of power and performance characteristics". It's a 15 percent performance OR 30 percent power improvement. Over a full-node, 2-year time frame that's pretty terrible. Going from 20 nm to 16 nm, TSMC claimed a 50% performance or 60% power improvement. 10 is an intermediate node between 16 and 7; from 16 to 10 they claim 15% performance or 35% power, and from 10 to 7 they claim 20% performance or 40% power, both better than the 7-to-5 claim. 5 seems a lot more like an intermediate node, performance-wise; it's just that the period from 7 to 3 is stretching out to 4 years from the 3 years it took to go from 16 to 7. Because it's getting stretched out so much and 3 nm won't be ready, they are adding N5P to what in recent history would have been a short-term node used mostly for SoCs.
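
    Compounding the per-node claims quoted above shows how much smaller each step has become:

    ```python
    # TSMC performance-gain claims per node transition, as quoted above
    claims = [("20 -> 16", 0.50), ("16 -> 10", 0.15),
              ("10 -> 7", 0.20), ("7 -> 5", 0.15)]

    cumulative = 1.0
    for step, gain in claims:
        cumulative *= 1.0 + gain
        print(f"{step}: +{gain:.0%} this step, {cumulative:.2f}x vs. 20nm")
    # 16 -> 7 compounds to 1.15 * 1.20 = 1.38x over ~3 years, so a further
    # 1.15x for 7 -> 5 does look like an intermediate step by comparison.
    ```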
  • Fataliity - Wednesday, December 11, 2019 - link

    When you talk about the "exclusive rights" to a specific transistor, are you referring to something similar to what Nvidia has on 14nm with their exclusive FinFETs?
  • Sychonut - Wednesday, December 11, 2019 - link

    Looking forward to 14+++++++ to combat that. Intel's 130 picometer is denser than competition's 5nm.
  • Vitor - Wednesday, December 11, 2019 - link

    TSMC: A

    Intel: Stop showing off already.
  • Adonisds - Thursday, December 12, 2019 - link

    Thanks Ian!
  • Fulljack - Thursday, December 12, 2019 - link

    Will Zen 3 use 5nm EUV? I doubt it; the capacity is probably already reserved by Apple for their next chip, the A14. Still, looking at TSMC's track record, they're on their way to becoming one of the leading players in the industry.
  • AnGe85 - Thursday, December 12, 2019 - link

    Zen 3 will use "7nm+", which could be either N7P or N7+ (with four EUV layers), most likely the latter. According to reports, AMD, Apple and HiSilicon (Huawei) have already signed contracts for 5 nm capacity. (Apple's A12 used N7, the current A13 uses N7P, and a possible A14 will probably use N5 in 2020.)
  • Kevin G - Thursday, December 12, 2019 - link

    As with everything bleeding edge, it wouldn't surprise me if Apple has a N7+ fall back plan for their next SoC. It appears that Apple has been running their designs across multiple foundry tools for this purpose and running their designs through multiple node tools would be an extension of what they're already doing. It is expensive but Apple has the resources and funding to ensure that they never miss a beat when it comes to new designs.
  • Adored - Thursday, December 12, 2019 - link

    "TSMC is stating that their 5nm EUV process affords an overall with a ~1.84x logic density increase, a 15% power gain, or a 30% power reduction."

    15% power gain should be 15% performance gain. Cheers Ian.
  • guycoder - Thursday, December 12, 2019 - link

    Is the relationship between performance and power gain linear? I do know that as CPUs are overclocked they use greater amounts of power for diminishing returns in performance (see the sketch below). A very interesting and informative article, and great to see a peek ahead from TSMC. Ian has been on fire with the quality and information in his reporting. Can't wait until he does another video blog with Wendell though :)
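
    On the linearity question: to first order, CMOS dynamic power goes as P ~ C * V^2 * f, and since sustaining a higher frequency usually requires a higher voltage, power grows distinctly faster than performance. A toy sketch (the voltage/frequency relationship below is an assumption for illustration):

    ```python
    def dynamic_power(voltage: float, freq_ghz: float) -> float:
        """First-order CMOS dynamic power, P ~ C * V^2 * f (relative units)."""
        return voltage ** 2 * freq_ghz

    base_v, base_f = 0.8, 3.0
    base_p = dynamic_power(base_v, base_f)
    for bump in (0.0, 0.1, 0.2, 0.3):
        f = base_f * (1 + bump)
        v = base_v * (1 + 0.5 * bump)   # assume +10% clock needs ~+5% voltage
        rel = dynamic_power(v, f) / base_p
        print(f"{f:.2f} GHz @ {v:.2f} V -> {rel:.2f}x power")
    # +30% frequency costs ~1.72x power in this model: decidedly non-linear.
    ```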
  • ijdat - Thursday, December 12, 2019 - link

    Also, N5P offers another 7% performance gain or 15% power reduction over N5. N6 is more likely than N7+ to be the big-volume follow-on from N7 because the layout rules are compatible; N7+ needs a relayout (N7 circuits can be used as-is on N6 but not on N7+). Both give a smaller chip size than N7.
  • ksec - Thursday, December 12, 2019 - link

    > Recent reports state that ASML is behind in shipping its 2019 orders, and plans to build another 25-27 in 2020 with demand for at least 50 machines.

    Well, ASML was supposed to speed up its 2019 production, which turned out to be roughly the same as 2018's. And why speed up when you're the only player in town?
  • tygrus - Thursday, December 12, 2019 - link

    It will take 10 to 12 months before we see commercially viable output being delivered to clients. Smaller dies for MCM, redundant SRAM, and binning by core count & speed all help increase yields of sellable chips. The yields are still low, and it will take several rounds of test & adjustment to improve them, improve characteristics, and adjust design rules for clients. It used to take 6 weeks for a raw silicon wafer to go through the production line before it could be fully tested. Make a change and it takes another 6 weeks... how many times must this be done to improve yields?
  • ijdat - Monday, December 16, 2019 - link

    6 weeks? [noise of me falling off chair laughing]

    You're out by roughly 3x for 7nm :-(

    But that's not how process yield optimisation works, it's way too slow. While bringing up a process a foundry will just keep pushing lots in several times a week trying many minor tweaks, doing analysis and inspection part-way through processing where possible, and feeding data back as it becomes available. Then this is repeated when they start pushing large-volume product through, because nothing helps get yield up like running many lots through the fab -- I've seen TSMC 7nm yield curves for "Product 1" and "Product 2" stretching over more than 6 months with data points every couple of weeks.

    I've also seen yield curves for 5nm showing yield is improving at least as fast as 7nm did vs. time, which is not unexpected because TSMC used 7FF+ to pipeclean EUV.

    This "baby steps" method to new processes is why TSMC have been succeeding at doing this and Intel haven't, they tried to take a massive leap for 10nm and crashed. It's also why I'm not convinced about their fantastic plans for 7nm and beyond, because they still haven't even got 10nm to a state where it can yield big high-core-count server chips. In comparison, I believe TSMC are already shipping 7nm chips which max out the reticle size (~800mm2?).
  • Kill16by9TN - Thursday, December 12, 2019 - link

    ".... million transistors per square millimeter (mTr/mm2)..." Ian, please fix those fractional milli-transistors, since the ISO prefix to indicate the intended million-multiplier is 'M'. ;-)
  • buxe2quec - Friday, December 13, 2019 - link

    Could someone explain or post links/keywords to learn more about "dense/high performance libraries"?
  • Duncan Macdonald - Friday, December 13, 2019 - link

    High performance libraries are cell designs optimized for highest performance at the expense of silicon area and (often) power consumption. Typically used for things like the CPU core and level 1 cache.
    Dense libraries are cell designs optimized for lowest silicon area (and often for power consumption) at the expense of speed. Typically used for things like level 2 or 3 cache.
    (The level 1 cache is normally far smaller than the level 2 or 3 caches and has far more activity so speed is more important.)
  • AnGe85 - Friday, December 13, 2019 - link

    For starters ;-)
    https://www.anandtech.com/show/13405/intel-10nm-ca...
  • chrysrobyn - Friday, December 13, 2019 - link

    The discussion of defects and chips per wafer absolutely misses the point that defects which land in SRAM arrays are often repairable. The repaired yield in this area could easily double the native yield (40% -> 80%).
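
    A quick inversion of those numbers under the Poisson yield model supports the point:

    ```python
    import math

    raw_yield, repaired_yield = 0.40, 0.80   # the figures from the comment above

    # Poisson model: yield = exp(-average lethal defects per die)
    defects_per_die = -math.log(raw_yield)       # ~0.92 defects/die in total
    lethal_per_die = -math.log(repaired_yield)   # ~0.22 still fatal after repair
    repairable = 1.0 - lethal_per_die / defects_per_die
    print(f"~{repairable:.0%} of defects would have to land in repairable SRAM")
    # ~76%, plausible for a test chip whose area is dominated by SRAM arrays.
    ```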
  • dew111 - Friday, December 13, 2019 - link

    Oh hey, I worked on that oscilloscope lol
  • HollyDOL - Friday, December 13, 2019 - link

    I don't quite understand that Fig. 12 shmoo plot. Take the GPU one: at 1.1 GHz you are good if you can run it at 0.9-1.2 V, and it's a failure if you run it at 0.7 V?
    I would have assumed it should be the opposite (i.e. the higher the frequency you can sustain at a specific voltage, the better). What am I missing?
  • name99 - Saturday, December 14, 2019 - link

    What it means is that if you try to run the device at 0.7 V it WILL NOT operate successfully at 1.1 GHz.
    It will run without faults at maybe 0.75 GHz, but as you crank the clock up above that, the device will start to fail in some way.
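
    For anyone else reading the figure, a toy shmoo generator may make the pass/fail pattern clearer. The linear fmax(V) model here is purely illustrative:

    ```python
    def fmax_ghz(voltage: float) -> float:
        """Assumed maximum stable frequency at a given voltage (illustrative)."""
        return 2.0 * voltage - 0.65

    volts = [0.70, 0.80, 0.90, 1.00, 1.10, 1.20]
    freqs = [1.3, 1.2, 1.1, 1.0, 0.9, 0.8]

    print("GHz   " + "  ".join(f"{v:.2f}V" for v in volts))
    for f in freqs:
        cells = ["pass " if f <= fmax_ghz(v) else "FAIL " for v in volts]
        print(f"{f:.1f}   " + "  ".join(cells))
    # Read along a row: at 1.1 GHz the part FAILs at 0.70-0.80 V and passes
    # from 0.90 V up, the same pattern as in Fig. 12.
    ```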
  • HollyDOL - Monday, December 16, 2019 - link

    Thanks for explanation, appreciated.
  • kareyholland@gmail.com - Friday, February 7, 2020 - link

    I checked previous comments and don't think this subject was covered. Can we get some insight into TSMC's N5P? I think this is the "high mobility channel" improvement. Are they improving the strain/stress on the Si fins, or are they replacing the Si fins with Ge, SiGe, or something else? I know this has been proposed for many years now, but I have not seen anyone move forward with fin replacement. Thoughts?
  • AndrewIntel - Sunday, July 10, 2022 - link

    This solution is based on 12-inch wafers; in the future the industry will move to 18-inch wafers, which means higher fab utilization, better pricing per wafer, and eventually better prices for the end user. Basically, many more dies per wafer. This die-per-wafer calculator shows the various options per wafer size: https://anysilicon.com/die-per-wafer-formula-free-...
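
    The usual first-order formula behind calculators like that one is simple enough to sketch (it ignores edge exclusion, scribe lines and notch losses):

    ```python
    import math

    def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        """Gross area term minus an edge-loss term (classic approximation)."""
        d = wafer_diameter_mm
        gross = math.pi * (d / 2.0) ** 2 / die_area_mm2
        edge_loss = math.pi * d / math.sqrt(2.0 * die_area_mm2)
        return int(gross - edge_loss)

    # 300 mm ("12 inch") vs. a hypothetical 450 mm ("18 inch") wafer, 100 mm2 die
    print(dies_per_wafer(300, 100))   # ~640
    print(dies_per_wafer(450, 100))   # ~1490
    ```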
