Monday, August 22, 2016

Two days to go until AMD's Hot Chips presentation - how about Zen's core size?

TL;DR: Zen's many integer schedulers might be related to dedicated dependency chain handling. And the Zen core might be just 4.9 mm² incl. L2 cache.

Here are some last speculative thoughts before we hear about Zen at Hot Chips 28, from the guy who told you first about Bulldozer's and Zen's microarchitectures, AMD's upcoming 32-core server chip, and some other interesting things. I can say this now, as AMD presented a first look at Zen's microarchitecture just a day after my last blog posting. Again, I was pretty close. This simply depends on the amount of data found in patches and patents.

However, some things are different: the speculated Zen microarchitecture shown here on this blog had a unified scheduler for the 4 ALUs and a second one for the AGUs, while Zen actually has 6 separate schedulers. The basis for my assumption was the cat core heritage. But of course, a unified scheduler holding lots of µOps for 4 execution units is still a big step up from a scheduler that only has to issue to two units (cat cores). Now what's the reason for this configuration? First, there are the typical design trade-offs, like area, delays, buffer sizes, and power efficiency. But there are also some interesting concepts, like a dependency chain oriented handling of instruction streams. If some code has an instruction level parallelism (ILP) greater than one, there are groups of instructions which at some point could be executed independently of the main flow, until their results get fed back. These groups, and sections of the main flow, could be called dependency chains: within a chain it is not possible to execute a newer instruction before an older one, as each instruction depends on some result of its preceding operation. Here is an example:
mov eax, [edi]       ; load -> eax
add eax, ecx         ; needs eax from the mov
imul edi, eax        ; needs eax from the add
cmp [ebp+08h], edi   ; needs edi from the imul
jnz out              ; needs the flags from the cmp
This code can't be executed out of order at all. All the logic put into an out-of-order scheduler would be a waste of energy in this case, and multiple execution units of the same type for parallel issue wouldn't speed it up either. A single scheduler with one integer execution unit (IEU) and one address generation unit (AGU) would be enough. The AGU wouldn't even need a separate scheduler, similar to the K7, K8, and K10 series of CPUs. This could be one reason for Zen's individual schedulers: one identified dependency chain could be sent to a single integer scheduler, plus one AGU scheduler if there are memory operands. The other schedulers might even be clock-gated then.
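Just to illustrate the idea, here is a toy sketch in Python of what "identifying" such a chain could mean (made-up data structures; real hardware would do this incrementally on renamed registers):

# Each op is (destination, [sources]); memory operands contribute their
# address registers as sources. This mirrors the asm example above.
ops = [
    ("eax",   ["edi"]),           # mov  eax, [edi]
    ("eax",   ["eax", "ecx"]),    # add  eax, ecx
    ("edi",   ["edi", "eax"]),    # imul edi, eax
    ("flags", ["ebp", "edi"]),    # cmp  [ebp+08h], edi
    (None,    ["flags"]),         # jnz  out
]

def is_single_chain(ops):
    """True if every op consumes the result of its direct predecessor."""
    return all(prev_dest in srcs
               for (prev_dest, _), (_, srcs) in zip(ops, ops[1:]))

print(is_single_chain(ops))  # True -> parallel issue wouldn't help here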

U.S. Patent No. 8,769,539 covers a scheduler that can be switched between out-of-order and in-order operation. One of the inventors is Zen project leader Suzanne Plummer, while fellow inventor Dan Hopper is also an important member of the Zen x86 core design team. In combination with many other patents (for example US20120023314), which cover dependency chain related logic, there might be such a scheduler in Zen.

Knowing the dependency chains also enables several efficiency measures. One patent covered different latencies for "far source operands" and close ones, i.e. operands coming from a different "lane" or from the same one. Bypass networks could then be implemented in a somewhat more relaxed way, which improves delay and power efficiency.

A Zen core size estimation

After talking about Zen's die size just days ago, there is another size which will likely be revealed at Hot Chips: the size of a Zen core. Earlier this year I created a table to estimate that size based on Excavator module components and some scaling factors. Based on AMD's statement about a density optimized process, one of the unknowns in this calculation just became a bit smaller. Their statement could mean either dense metal layers or high density standard cell libraries. For simplicity, and for lack of further data, assuming no density related scaling should do. Process related scaling is a different story, though.

Using die photos, it is possible to measure the size of a graphics CU. On a Polaris 10 die, a graphics CU is about 2.96 mm², while Carrizo's CUs measure 7.21 mm². This results in a scaling factor of about 41%. Putting this together with some individual scaling factors based on design changes (e.g. more ALUs, smaller multiplier arrays, 64KB L1 I$ - already included in the "area 1C" number) results in a Zen core size, incl. L2 cache, of about 4.9 mm².
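For those who want to follow the numbers, here is the calculation in a few lines of Python. The Excavator-derived baseline is an assumption standing in for the "area 1C" result of my table (which already folds in the design-change factors):

polaris10_cu_mm2 = 2.96          # measured from a Polaris 10 die photo (14nm)
carrizo_cu_mm2 = 7.21            # measured from a Carrizo die photo (28nm)
process_scaling = polaris10_cu_mm2 / carrizo_cu_mm2   # ~0.41

# Assumption: 28nm-equivalent Zen core area incl. L2 ("area 1C" with
# design-change factors applied), chosen here to match the table's result.
area_1c_mm2 = 11.9

zen_core_mm2 = area_1c_mm2 * process_scaling
print(f"{process_scaling:.0%} scaling -> {zen_core_mm2:.1f} mm² per core")
# 41% scaling -> 4.9 mm² per core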

Zen core size estimation based on Excavator data

Wednesday, August 17, 2016

Some last chance pre Hot Chips speculation about Zen

TL;DR: I made a new (stitched) Zeppelin die photo. AMD's datacenter APU might use multiple Zeppelin and Greenland dies. Zen's FPU might have some interesting and unique capabilities.

There is less than one week left until AMD's Zen presentation at Hot Chips. A redditor set up this nice countdown. As usual, AMD will not talk about final SKUs and clock frequencies, but they surely will give more details about individual microarchitectural features. While this will be the first chance to verify what was posted on this blog five and ten months ago, it will surely reduce the number of features left to speculate about. In other words: this is a last chance to post some yet unpublished thoughts about the microarchitecture.


For a start, you get this full Zen/Zeppelin die shot, created by stitching together multiple patches of the already known photo showing part of a Zeppelin wafer:

Labelled Zeppelin die photo (stitched)
Due to a missing reference, it is difficult to find the correct aspect ratio of the die. Scaled as shown above, it looks roughly "right" and also matches Hans de Vries' corrected image.

In the past I estimated the die size at roughly 160 mm², based on what's in the core and how other components might scale. When matching this die shot's DDR PHY to that of Skylake, I get roughly 200 mm² (assuming a good guess of the aspect ratio and roughly similar DDR PHY area). So I wouldn't be surprised if Zeppelin lands somewhere in the 160-200 mm² range.

GMI-Links and the datacenter APU

AMD's GMI links (Global Memory Interconnect) have been known since Fudzilla mentioned them here. Soon afterwards they posted a slide, which likely shows a schematic view of AMD's planned datacenter APU. This slide was the basis for the picture below. There I noticed the placement of the orange lines in the center of the Zeppelin and Greenland dies. As "Data Fabric" is written in the same color, the horizontal lines likely mean the same.

So what does this tell us? Well, it looks like both the CPU part and the GPU part consist of two dies each, which are also connected via GMI. If you have already heard about the distributed memory controller in Zen based processors (mentioned in combination with a directory based coherency protocol on LinkedIn), all of this makes sense. Knowing the leaked die photo shown above, it is not far-fetched to assume that the two GMI link structures (GMI-Link #0 and #1) actually comprise four links in total. Two of them would connect the two Zeppelin dies, giving each access to the two distributed memory controllers (and the two DDR4 channels they provide) on the other die. The two remaining links provided by each die go to the two Greenland dies, which in turn might also have four GMI links each. Each of the GPU dies might have just one HBM PHY. Of course, the shown Greenland GPU might just be a monolithic die sitting on the interposer. But while we are at it, an interposer would be a perfect way to stitch multiple dies together - something that is expected to come to a greater extent with Navi.
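To make the hypothesized topology concrete, here it is written out in Python - purely my assumption, derived from the slide and the reasoning above, not anything AMD has confirmed:

from collections import Counter

# Every tuple is one GMI link. The two Zeppelin dies use two links between
# each other (one per remote distributed memory controller / DDR4 channel)
# and one link to each Greenland die, for four links per Zeppelin die.
gmi_links = [
    ("Zeppelin0", "Zeppelin1"),   # to remote MC / DDR4 channel 0
    ("Zeppelin0", "Zeppelin1"),   # to remote MC / DDR4 channel 1
    ("Zeppelin0", "Greenland0"),
    ("Zeppelin0", "Greenland1"),
    ("Zeppelin1", "Greenland0"),
    ("Zeppelin1", "Greenland1"),
]

links_per_die = Counter(die for link in gmi_links for die in link)
print(links_per_die)
# Counter({'Zeppelin0': 4, 'Zeppelin1': 4, 'Greenland0': 2, 'Greenland1': 2})

That leaves two of the (assumed) four links per Greenland die unaccounted for - perhaps connecting the two GPU dies to each other.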

This would provide a lot of flexibility in configuring different processors from a small set of dies: one 8C Zeppelin die and probably just one Greenland die. One important reason for this would be design costs, which grow with each new process node.


Floating Point Unit

One of the more interesting parts of the Zen microarchitecture is the four-wide FPU. As the GCC patch suggests (by the decode types "single" and "double"), the FPU's native width is 128 bit for SIMD operations. A different patch mentioned a 3 cycle latency for cache accesses by the FPU. With the base L1D$ latency of 4 cycles indicated by the patches, this would mean a total latency of 7 cycles for FP memory accesses. This is likely the cost of going through the FX unit ("fixed point"), which contains the load/store unit responsible for L1 data cache accesses. I won't go through the full details of the patches regarding all the different instructions. Let me instead point you to a wonderful CPU chart found at InstLatX64, which also includes Zen, or rather Zeppelin, and to Looncraz' instruction mapping table.

But two things stood out about the MUL instructions:
  • In early patches, FMA was shown as using a combination of an FMUL pipeline (fp0/fp1) and the second FADD pipeline (fp3). This led me to the assumption that we might see an incarnation of the bridged FMA here. Later this patch info was changed, so that FMA instructions would run through the FMUL pipelines only.
  • (SSE) FMUL and SSE IMUL instructions seem to occupy specific stages of the corresponding pipelines for more than one cycle, which means their throughput would be lower than 1 per pipeline. In the GCC patches this is specified with a times symbol, for example "fp0*3". However, this is not the case for FMA instructions, which could be related to a special treatment (due to the bridge) that might skip some to-be-iterated stage. We might learn a bit more about that next week.
One reason for this might be Zen's cat core heritage. To be more power efficient, the cat cores use a "rectangular" or "iterative" multiplier. This means the multiplier array is reduced in depth, so that it can do 32-bit FP multiplications at full throughput, but becomes slower the wider the FP numbers are (64 bit and 80 bit). This is caused by the need for multiple iterations through the multiplier array to produce all the partial products, of which there are more the wider the numbers are. This saves a lot of power and area, while still maintaining full throughput for a lot of FP/SIMD code (incl. games) that uses single precision. The also widely used double precision has a lower throughput, which doesn't cause much of a performance hit on the cat cores, as a lot of code has more FADDs than FMULs anyway. It typically costs a few percent with DP code in exchange for a nice power efficiency improvement.
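As a rough illustration - with an assumed array width, since I don't know the actual dimensions of the cat cores' or Zen's multiplier - the number of passes through a rectangular array grows with the significand width, and throughput shrinks accordingly:

import math

ARRAY_BITS = 27  # assumption: array sized so single precision needs one pass

for fmt, significand_bits in [("single (32b)", 24),
                              ("double (64b)", 53),
                              ("x87 ext (80b)", 64)]:
    passes = math.ceil(significand_bits / ARRAY_BITS)
    print(f"{fmt:14s}: {passes} pass(es) -> throughput 1/{passes} per pipe")

# single (32b)  : 1 pass(es) -> throughput 1/1 per pipe
# double (64b)  : 2 pass(es) -> throughput 1/2 per pipe
# x87 ext (80b) : 3 pass(es) -> throughput 1/3 per pipe

Which widths map to which pass counts is guesswork here, but the pattern would match the "fp0*3" style annotations in the GCC patches.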

Another aspect is that in the case of a bridged FMA (or something similar, see below), the FPU wouldn't need as many FPRF read ports: during an FMA operation, a first unit (FMUL) reads the two multiplicands of the FMA instruction, while a second unit (FADD) reads the addend with its own ports and finishes the FMA operation by doing the addition, normalization, and rounding. I think it is interesting to note that several cat core related patents covering an FMA unit did show such a delayed read of the addend.
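Spelled out as a toy timeline (the FMUL latency is an assumed placeholder, not Zen's real value): a monolithic FMA unit computing a*b + c reads three operands at issue time, while the bridged variant never needs more than two ports on any one unit.

def bridged_fma_operand_reads(issue_cycle, fmul_latency=3):
    """Toy model: when each half of a bridged FMA reads its operands."""
    return [
        (issue_cycle,                "FMUL", ("a", "b")),  # 2 ports used
        (issue_cycle + fmul_latency, "FADD", ("c",)),      # 1 port, delayed
    ]

for cycle, unit, operands in bridged_fma_operand_reads(0):
    print(f"cycle {cycle}: {unit} reads {operands}")
# cycle 0: FMUL reads ('a', 'b')
# cycle 3: FADD reads ('c',)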

Since Zen will come with a lot of cores (especially the datacenter variants with up to 32 cores), AMD's (or Jim Keller's?) choice might have been to cut per core power consumption for the sake of higher total core counts. Instead of making the whole core weaker, they seemingly decided to avoid hardware support for wide SIMD. This way, there is no need for 256b datapaths from the L1D$ to the execution units, 256b wide registers, and of course 256b wide execution units. This already saves some power, as can be derived from the following chart, taken from the paper "Improving the Energy Efficiency of Big Cores" (PDF):

Similar to SIMD execution width, multipliers still contribute a large part of an FPU's power consumption at full throughput, so AMD might have cut that further by using (updated) iterative multipliers as found in the cat cores. Maybe this is even the reason there are two FMUL units per core at all, as the construction core line showed that AMD avoided having that many FMUL/FMA units in a single core. There are other nice effects, like a reduction in voltage droops, which were the next big thing in Steamroller and are still being handled by Sam Naffziger's "Voltage Droop Mitigation" in Carrizo, Bristol Ridge, and even Polaris. An AMD paper described how researchers were able to increase the base clock frequency of an Orochi processor by 400MHz and more simply by reducing the throughput of heavy FP ops like FMUL. An FMUL implementation with an iterative multiplier would have a similar effect already built in.

But that's not all. AMD Research lists a paper called "REEL: reducing effective execution latency of floating point operations", which can be found here (for many at least as an abstract). In this paper, researchers describe a novel FPU which contains some additional registers located one pipeline stage before rounding and normalization, keeping intermediate values for later reuse. With a modified scheduler it is possible to significantly reduce the effective execution latency of a chain of dependent instructions, by forwarding intermediate results from this internal micro register file. Execution latency is an important aspect of floating point performance, as many calculations found in typical code have a low ILP (instruction level parallelism); in these cases, reducing latencies is what helps. You may compare some of Zen's latencies in the CPU chart mentioned above. On top of that, an FPU as described in REEL would help even further. FMA hasn't explicitly been discussed in the paper, but the way the FPU works, there is even a kind of inherent FMA execution (a form of fusion) for dependent FMUL/FADD instructions.
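Here is a toy model of the effect (all stage counts are made up for illustration; the paper's FPU and Zen's real latencies will differ): dependent ops pick up the unrounded intermediate from the micro register file, so only the last op in a chain pays the normalize/round latency.

EXEC_CYCLES = 3    # assumed: cycles until the unrounded result exists
ROUND_CYCLES = 1   # assumed: normalize + round
chain_length = 4   # e.g. four dependent FADDs

# Conventional FPU: every op waits for its fully rounded input.
baseline = chain_length * (EXEC_CYCLES + ROUND_CYCLES)

# REEL-style FPU: dependents read the unrounded intermediate early;
# only the final result still needs normalize + round.
reel = chain_length * EXEC_CYCLES + ROUND_CYCLES

print(baseline, "vs.", reel, "cycles")  # 16 vs. 13 cycles effective latency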

Remaining things


What else could be shown at Hot Chips for Zen? Based on presentations and patents, I wouldn't be surprised to see:
  • a package level integrated voltage regulator, or maybe even a FIVR
  • per thread priorities for a more efficient SMT implementation in cases of differently prioritized threads (e.g. Prime95 in background and a game in foreground)
  • finally a working ASF/transactional memory implementation, which becomes increasingly important for higher core counts
  • more efficient address handling (esp. of stack addresses for accessing a stack cache) to reduce AGU usage
  • dual front ends in the future for an increasingly powerful execution back end
  • future application of die stacking (2.5D, 3D) and PIM
  • interesting network on chip topologies for 16 and 32 cores like "ButterDonut"
  • uOp and stack caches
  • reduced branch misprediction penalty thanks to checkpointing or fast rollback of executed instructions
  • FMUL/FADD fusion (to handle them via FMA operations)
This is what I wanted to get out before enjoying this year's Hot Chips Zen revelations!

The next article on this blog will be about Zen clock frequency and performance projections, as more information has become available. BTW, have you seen Looncraz' Zen analysis at the end of his XV article yet?

Monday, August 8, 2016

Some AMD Zen leaks (ES clocks, PCI info, Rambus DDR PHY, Roadmaps) [UPDATED]

Update: Small corrections, comments on PCIe dump, AoTS leaks, link to more slides added.
TL;DR: First Zen OPN and PCIe info leaked. Additionally I do a recap of some other recent leaks.

Planet3DNow! forum user "Crashtest" posted a Zen ES OPN and PCI device info (behind the spoiler button), which somehow landed in the CPU-Z and SIV databases.

So the OPN is "1D2801A2M88E4_32/28_N". Using the already known schema, "D" stands for desktop, "28" for the base clock frequency (2.8GHz), and the first "8" of "88" for the number of cores. More explicitly, "32/28" stands for the turbo boost frequency (3.2GHz) and, again, the base clock (2.8GHz). This matches the information given for the 8C DT ES in a recent AnandTech forum posting:
"The most exciting part is core clock. The 8c/95W variant's base clock is 2.8GHz, all core boost is 3.05GHz and maximum boost is 3.2GHz."
This hadn't been confirmed in this way before (except that I heard it is true). So until now there was nothing available that actually supported the information posted there.
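For fun, here is the schema as a small decoder. The field positions are my assumptions, extrapolated from this single sample and the known naming scheme, so treat it as a sketch rather than a definitive OPN grammar:

def decode_opn(opn: str) -> dict:
    """Decode a Zen ES OPN like "1D2801A2M88E4_32/28_N" (assumed layout)."""
    code, clocks, _suffix = opn.split("_")
    boost = clocks.split("/")[0]
    return {
        "segment": {"D": "desktop"}.get(code[1], "unknown"),
        "base_ghz": int(code[2:4]) / 10,   # "28" -> 2.8 GHz
        "cores": int(code[9]),             # first digit of "88" -> 8 cores
        "boost_ghz": int(boost) / 10,      # "32" -> 3.2 GHz
    }

print(decode_opn("1D2801A2M88E4_32/28_N"))
# {'segment': 'desktop', 'base_ghz': 2.8, 'cores': 8, 'boost_ghz': 3.2}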

The PCI information found in the SIV database looks as follows:
Bus-Numb-Fun IRQ Vendor-Dev-Sub_OEM-Rev Class (9:255) Vendor and Device Description Showing 39 of 39
[0 - 00 - 0] 1022-1450-14501022-00 Host Bridge AMD
[0 - 01 - 0] 1022-1452-00000000-00 Host Bridge AMD
[0 - 01 - 2] 1022-1453-00000000-00 PCI Bridge (0-1) x4 (x4) AMD
[0 - 02 - 0] 1022-1452-00000000-00 Host Bridge AMD
[0 - 03 - 0] 1022-1452-00000000-00 Host Bridge AMD
[0 - 04 - 0] 1022-1452-00000000-00 Host Bridge AMD
[0 - 07 - 0] 1022-1452-00000000-00 Host Bridge AMD
[0 - 07 - 1] 1022-1454-00000000-00 PCI Bridge (0-8) x16 (x16) AMD
[0 - 08 - 0] 1022-1452-00000000-00 Host Bridge AMD
[0 - 08 - 1] 1022-1454-00000000-00 PCI Bridge (0-9) x16 (x16) AMD
[0 - 20 - 0] 1022-790B-790B1022-59 SMBus Controller AMD
[0 - 20 - 3] 1022-790E-790E1022-51 ISA Bridge AMD
[0 - 20 - 6] 1022-7906-79061022-51 SD Host DMA Controller AMD
[0 - 24 - 0] 1022-1460-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Link Control
[0 - 24 - 1] 1022-1461-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Address Map Configuration
[0 - 24 - 2] 1022-1462-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor DRAM Controll
[0 - 24 - 3] 1022-1463-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Miscellaneous Control
[0 - 24 - 4] 1022-1464-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Link Control
[0 - 24 - 5] 1022-1465-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Function 5 Configuration
[0 - 24 - 6] 1022-1466-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Function 6 Configuration
[0 - 24 - 7] 1022-1467-00000000-00 Host Bridge AMD Summit Ridge (K17) Processor Function 7 Configuration
[1 - 00 - 0] 1022-43B9-11421B21-02 XHCI Controller x4 (x4) AMD Promotory USB 3.1 XHCI Host Controller
[1 - 00 - 1] 1022-43B5-10621B21-02 SATA (AHCI 1.0) x4 (x4) AMD
[1 - 00 - 2] 1022-43B0-00000000-02 PCI Bridge (1-2) x4 (x4) AMD
[2 - 00 - 0] 1022-43B4-00000000-02 PCI Bridge (2-3) x1 (x1) AMD
[2 - 01 - 0] 1022-43B4-00000000-02 PCI Bridge (2-4) x1 (x1) AMD
[2 - 02 - 0] 1022-43B4-00000000-02 PCI Bridge (2-5) x1 (x1) AMD
[2 - 03 - 0] 1022-43B4-00000000-02 PCI Bridge (2-6) x1 (x1) AMD
[2 - 04 - 0] 1022-43B4-00000000-02 PCI Bridge (2-7) x0 (x4) AMD
[3 - 00 - 0] 14E4-1687-168714E4-10 Ethernet Controller x1 (x1)Broadcom NetXtreme BCM5762 Gigabit Ethernet PCIe
[3 - 00 - 1] 14E4-1640-164014E4-10 SD Host DMA Controller x1 (x1)Broadcom
[5 - 00 - 0] 1002-68F9-010E1002-00 VGA Controller x1 (x16) AMD Cedar Pro [Radeon HD 5450/Radeon HD 6350] [GPU-0]
[5 - 00 - 1] 1002-AA68-AA681002-00 High Def Audio x1 (x16) AMD Cedar/Park HDMI Audio
[8 - 00 - 0] 1022-145A-145A1022-00 Other (0x130000) x16 (x16) AMD
[8 - 00 - 2] 1022-1456-14561022-00 Other Encryption x16 (x16) AMD
[8 - 00 - 3] 1022-145C-145C1022-00 XHCI Controller x16 (x16) AMD
[9 - 00 - 0] 1022-1455-14551022-00 Other (0x130000) x16 (x16) AMD
[9 - 00 - 2] 1022-7901-79011022-51 SATA (AHCI 1.0) x16 (x16) AMD
[9 - 00 - 3] 1022-1457-14571022-00 High Def Audio x16 (x16) AMD

Total of 7 PCI buses and 39 PCI devices in 0.040 seconds.
Source: SIV database

Update #1: As "Crashtest" explains in a later posting, the respective Summit Ridge system (with a Myrtle mainboard) seems to have at least 36 PCIe lanes. According to him, the listed configuration seems a bit chaotic. BTW, "Promotory" in the dump above should actually be written "Promontory".

Update #2: Thanks to the OPN, Planet3DNow! user "BoMbY" identified some Ashes of the Singularity benchmark results, which were run on two different Zen engineering samples. One had the same OPN, while the other's was slightly different ("2D" instead of "1D"). As they've been removed from the AotS database, you can find them archived here.

Next, there was a BitsAndChips article about the likely provider of the DDR4 PHY found on Zen based processors: Rambus. That speculation is based on details the author learned from his sources, which fit well with what Rambus recently announced regarding their 3200 Mbps DDR4 PHY for Globalfoundries' 14LPP process. You can learn a bit more about their technology here.

Another interesting bit of info is a pair of AMD roadmaps, which look real. But as one could recently see with the Athlon X8K die shot posted at Reddit, even the quality of die shot fakes can be rather high (also see my analysis there). In that case the creator of the die shot wrote me that he actually just tried to check the viability of such a product regarding die size and thus processing costs. Back to the roadmaps.

For 2017 they show:
  • Raven Ridge APUs for the FP5 socket (4C/8T, <=12 gfx CUs, 4-35W TDP)
  • higher TDP models (65-95W) for the AM4 socket (also 4C/8T and an unknown number of gfx CUs)
  • also AM4 based Summit Ridge CPUs with 8C/16T and TDPs of 65 to 95W
Most of the boxes are marked as "AMD PRO", which usually stands for a separate series of products targeted at commercial customers. Due to special certification programmes, perhaps additional testing, and of course the integration into ready-to-ship OEM hardware, these products might hit the shelves a bit later than the consumer variants.

There is also a notable difference in the listed processes: "14nm SoC" for Raven Ridge and "14nm FinFET" for Summit Ridge. I assume that the "14nm SoC" process might refer to a different metal layer density, as recently covered in a (Japanese) article by Hiroshige Goto about APUs and AMD's FinFET efforts. Thus "FinFET" could stand for less dense lower metal layers, which would allow for somewhat higher clock speeds due to lower wire delays.

A note on the slides: I saw some unusual pixel patterns and spacings in the "14nm SoC" and "14nm FinFET" boxes (aside from different sizes). I'd expect a scaled, interpolated PowerPoint slide to show subpixel positioning of individual characters. But I saw only pixel-exact 1 or 2 pixel spacings, with exactly identical interpolated pixels around the characters. I'll try to reproduce this in PPT.

In the end, these slides (if real) might mean that there will be no desktop Zen available in 2016 (not even as the promised sampling at the end of the year). But that is not certain yet. My own GCC patch based launch speculation gave a rough launch date range between 10/2016 and 05/2017.

Update #3: You can find the complete two slides and an additional one here. They don't look faked, although some details seem a bit awkward. But this might be attributed to a smaller target audience (I suppose decision makers, with more interest in dates and specs).