
  • Devo2007 - Thursday, November 14, 2019 - link

    First page "As we move into 2019" - should be "As we move into 2020"
  • plp1980 - Thursday, November 14, 2019 - link

    It says as we move "through", not as we move "into".
  • Ryan Smith - Thursday, November 14, 2019 - link

    It did originally say "into". We've since fixed it. =)
  • Netmsm - Friday, November 15, 2019 - link

    Ryan, why isn't there any Cinebench test?!
  • TheJian - Wednesday, November 20, 2019 - link

    Because nobody uses it, as Intel said (nobody is 1% or less, right?). Nobody making money is using something that is far slower than PAID products. Pointless to benchmark this in every review, just like it's pointless to test 4k in all vid card reviews when nobody is using that either (ok, nobody here means less than 2%...LOL). Whatever. Surely Ryan is chomping at the bit now to tell me 4k is the new enthusiast standard...LOL. Yeah, wake me when 1440p is, as it still isn't LONG after you said that in the 660ti article. Still not even 5% years later; heck, both added up don't hit 7% last I checked (a month ago?). 1080p however? 65% of users of 130mil steam gamers (this is pretty accurate for the world elsewhere no doubt). Should test LOADS of 1080p games, and maybe benchmark 1/2 at 1440p, only 1-2 at 4k if at all (should be done once a year, or in a separate review of 4k yearly?). Until people use it, quit wasting time.

    Anandtech (and many others) seem to do a lot of testing that is NOT how a user would use their PC. Handbrake crap quality etc. Who uses FAST? FASTER? Are you blind already, so blur doesn't matter? Cinebench freeware, same story. Intel seems to have a point, though they didn't mind before they started losing massively on all these things they are now whining about.

    https://store.steampowered.com/hwsurvey/Steam-Hard...
    Yep, still right: not even 5% yet for 1440p, and the 4k+1440p total is still about 7%...LOL. Keep dreaming Ryan ;)
  • Netmsm - Sunday, November 24, 2019 - link

    Impertinent words! These are not an answer.
    Everybody who works on editing films knows how helpful the Cinebench tests are in determining which CPU will be faster.
  • alysdexia - Thursday, November 28, 2019 - link

    shall be swifter
  • peevee - Monday, November 25, 2019 - link

    "Anandtech (and many others) seem to do a lot of testing that is NOT how a user would do use their PC."

    Absolutely. And all their tests get the same amount of space. Including those nobody can use or reproduce. BS all the time, like off-screen rendering, compute on Dolphin emulator, in-house 3DPM... Ancient codecs, irrelevant settings... Somebody needs to bring them back into reality.
  • Ian Cutress - Tuesday, March 24, 2020 - link

    There is. It's in our benchmark database. www.anandtech.com/bench
  • asking - Sunday, July 5, 2020 - link

    @Ian Cutress there is significant doubt (in the form of harassment) being expressed on the forums about your conclusion on page two of this article that the power consumption of Ryzen chips changed (went upward) between Zen+ and Zen 2. It would be interesting to see your further thoughts: https://forums.anandtech.com/threads/will-cpu-supp...
  • Netmsm - Saturday, November 16, 2019 - link

    Also, in section "x264 HD 3.0: Older Transcode Test" the result of "3DPM v1 Multi-Threaded" is mistakenly placed instead of "x264 HD 3.0 Pass 2".
  • The_Assimilator - Thursday, November 14, 2019 - link

    "I’m sure some people will disagree about those 50 MHz"

    We call those people "whiny bitches who should STFU".
  • nathanddrews - Thursday, November 14, 2019 - link

    In a world of such precision and technical pedants, you have to admit that it is false advertising to say 4.7GHz, when it is 50MHz shy. Rounded up, it's OK, but it's only 1% shady.

    For my use case, this sentence nails it perfectly: "the Core i9-9900KS is still running at 5.0 GHz for sustained single threaded work, which is still 7-15% higher than the Ryzen 3950X, and as a result it does pull out ahead in a number of ST tests as well as in low resolution (CPU-bound) gaming". Most of the games I play are not current-gen visual spectacles, but rather twitch and competitive games that are a few years old. My priority is the highest possible frame rates for high refresh gaming. I'm not sure that I do enough video editing to justify Ryzen, as tempting as the rest of the package is.
  • Cooe - Thursday, November 14, 2019 - link

    Most every other review I've seen has it hitting the full 4.7GHz, with many even going beyond into the 4.75GHz range when adequate cooling is used. The silicon binning quality of the 3950X seems to be absolutely freaking insane. Methinks this -50MHz deficit is unique to something specific to Ian's setup here.
  • RSAUser - Thursday, November 14, 2019 - link

    Gamersnexus also seems to have gotten a bit of a dud. LTT seems to have gotten a good one.
  • Cooe - Thursday, November 14, 2019 - link

    Not just Linus, most people have gotten "good ones". I can count the number of reviews with chips that didn't reach the advertised 4.7GHz on one hand and have fingers left over to spare (and if I include all those within 50ish MHz or so, like Ian's here, it drops to just one).
  • zmatt - Thursday, November 14, 2019 - link

    Every cpu I have ever owned has always been a percent or so off the advertised frequency either above or below. The number on the box is really just an average and always has been.
  • uefi - Thursday, November 14, 2019 - link

    Don't forget, Intel has their share of occasional performance-shaving microcode patches every year or so.
  • eek2121 - Thursday, November 14, 2019 - link

    I walked away with a very different picture. Right now Anandtech is clearly GPU bound in the benchmarks. They are benchmarking on a GTX 1080, and the results clearly reflect that. Having run some of these games on a 1080ti on my stock 1950X, I get a better result. They really need a 2080ti or 2080 super at this point.
  • plonk420 - Thursday, November 14, 2019 - link

    they can't really use a 1080Ti or better with GTA5... check out GN's coverage: if you hit over ~180fps, you hit a cap that results in insane stuttering (same with RDR2 and 144hz or so)
  • zmatt - Thursday, November 14, 2019 - link

    Maybe they should stop using such an old and buggy game as a benchmark then, since it's trivial to hit more than acceptable frames with modern hardware.
  • imaheadcase - Thursday, November 14, 2019 - link

    Um, they are using 1080p; it's not GPU bound by any stretch of the imagination on the games they tested. lol
  • Spunjji - Thursday, November 14, 2019 - link

    Boost frequency isn't guaranteed, so it's not false advertising any more than Intel's TDP.

    If the games you're playing are a few years old and your GPU is sufficient to make the CPU the limit, you'll likely see no real-world benefit from anything above the Ryzen 3600X / Core i5 9600. Discussion of high-end CPUs rapidly becomes moot, even for that niche-within-a-niche.
  • shaolin81 - Thursday, November 14, 2019 - link

    Problem isn't the chip, but the fact they tested it with the High Performance profile, which nearly never parks idle cores, and therefore there's nearly no chance to hit max Turbo on a single core. If they tried the Balanced profile, they would reach 4.7 or 4.75 more easily.
    I'm wondering how Anandtech doesn't know this.
  • GlossGhost - Tuesday, November 19, 2019 - link

    Quite a lot of people don't know this. Maybe we, who know, are stupid instead?
  • Targon - Thursday, November 14, 2019 - link

    One thing to consider is motherboard and the VRMs on the motherboard, power supply, and even RAM. There was an issue causing slightly lower CPU performance prior to AGESA 1.0.0.3ABBA, and 1.0.0.4 is supposed to help a bit as well.

    You also don't take into account that the IPC of the new Ryzen chips is actually a bit better than what you see with the i9-9900k and ks. It may not be enough to offset the clock speed difference, but it does come into play.

    Now, the other thing that will come into play is the security problems that keep showing up. Security mitigations are put into place, and Intel chips get another slowdown. In another year or two, Ryzen third generation vs. the 9900ks might actually be completely even with zero advantage to Intel. You may not be worried about security, but if an OS update puts in the security patches, your performance WILL drop.

    We only have another 7 months until the next generation Ryzen chips come out with another 8 percent IPC improvement. Will Intel still be on 14nm, with a 10900k being the high end from Intel, without any IPC improvements?
  • zmatt - Thursday, November 14, 2019 - link

    What's the highest refresh rate you can get on gaming monitors today? 144hz right? When both Intel and AMD's best are exceeding 144fps in these benchmarks is there a real world difference between the two? I don't think so. You can only be as fast as your refresh rate ultimately.
  • brantron - Thursday, November 14, 2019 - link

    There are 240 Hz monitors with FreeSync and/or G-Sync, so you could (hopefully) run games somewhere between there and 144 Hz without stuttering.

    I've only seen comparisons to 60 Hz, though, and that's really going to depend on the game.
  • Winnetou - Thursday, November 14, 2019 - link

    The highest refresh rate is 300Hz now, though it's only one monitor so far. There are multiple 240Hz ones though.

    Even so, your point is actually worthless here. Having a really high framerate does help with responsiveness and input lag. That's why top CS players still play at 720p low, at 600 FPS. For casuals like us it may be worthless, but for them it isn't. And maybe even for us it won't be, just too small of a difference to be quantifiable.
  • RSAUser - Friday, November 15, 2019 - link

    The frame rate that high is to avoid lows.
  • nathanddrews - Friday, November 15, 2019 - link

    That's typically the main problem: minimum frames. Across most benchmarks, Intel can maintain significantly higher minimum frame rates for 144Hz and up. Obviously these metrics are going to be very game-dependent and settings-dependent, but the data are very clear: Intel's higher frequencies give it a significant advantage for minimum frames.
  • treichst - Thursday, November 14, 2019 - link

    165Hz monitors have existed for a while now. 240Hz monitors exist from a number of vendors.
  • RedGreenBlue - Thursday, November 14, 2019 - link

    The reason this is okay is the same reason a lot, or most, car engines advertise a.b liter engines when in reality it's about a.(b - 0.05). It's a generally accepted point of court precedent, or maybe in the laws themselves, that you can't sue for rounding, unless it's an unreasonable difference from reality. So if you want to sue AMD, expect to be laughed out of court.
  • shaolin81 - Friday, November 15, 2019 - link

    The problem most probably lies in using the High Performance profile. When I use it with my 2700X, idle cores are nearly never parked, and therefore there's no room for the CPU to get Max Turbo on a single core. When I change it to the Balanced profile, most idle cores are parked and I get beyond Max Turbo easily - to 4450MHz. Anandtech should try this.
  • prophet001 - Friday, November 15, 2019 - link

    I think that's me too nathanddrews. I mostly play WoW and though I haven't seen any actual benchmarks, I'm pretty sure that the 9900KS will outperform the Ryzens. WoW just scales with clock speed past like 4 cores.
  • SRB181 - Friday, November 15, 2019 - link

    I've often wondered what people are looking for when cpu's get to this point. I'm absolutely positive the 9900ks will outperform my current CPU (1950x), but I'm getting 78 fps at 4k60 in WOW set to ultra (10) in the graphics. Not being a jerk, but what exactly do you notice in gameplay with 9900ks that I would notice/see?
  • prophet001 - Monday, November 18, 2019 - link

    I don't have a 9900 I've just seen how WoW behaves with various processors and my own personal experience. I have an old extreme processor x299 chipset with a 1080 GTX. It's highly CPU bound. Even though the processor is 4 core hyper threaded, it won't get past like 45 FPS in Boralus. I know it's not GPU bound because I look at the GPU load in GPU-z and it's like 40% load. I can turn render scale up or down and doesn't really make a difference on my computer.

    That, along with other people's research, leads me to believe that WoW is highly CPU bound but more specifically, core clock speed bound. People that get 5GHz get much more out of the game than people that leave their CPUs at stock clocks.
  • Qasar - Monday, November 18, 2019 - link

    prophet001, there must be something else going on with your system. As I type this I am sitting in Boralus harbor, right above where the ship from SW docks, looking out towards the mountain, with the ship from SW coming into dock on the left. I am getting a minimum of 65 FPS as I spin on the spot. I'm running an Asus Strix 1060 Gaming with an i7 5930K @ 4.2GHz, at 1080p. In other zones I have seen as high as the 180s.... CPU and GPU utilization are 30-40% and a solid 25% respectively.
    Maybe there is something else with your system that is causing this?
  • Qasar - Monday, November 18, 2019 - link

    Should also mention, that's with pretty much max on the graphics options.
  • WaltC - Friday, November 15, 2019 - link

    Wrong answer...;) AMD has only ever said "max single core boost", emphasis on the word "max," which evidently must be translated for the benefit of people mindlessly trying to pick it apart because the meaning of "max" ever eludes them!...;) Really, I've seen all kinds of stupid come out on this one. AMD does not say "guaranteed single core boost of 4.7GHz" because it's not guaranteed at all--it is the "max" single-core boost obtainable--not the "only" single-core boost the CPU is capable of! Uh--I mean, I'm embarrassed I actually have to explain this, but *any single-core boost clock above the base clock of the CPU* is a *genuine boost of the core*, and "max" of course means only the very maximum single-core boost clock obtainable at any given time, depending on all of the attendant conditions. So, take a situation in which all the cores boost to 4.5GHz, 4.6GHz or 4.3GHz--every single one of them is providing the advertised single-core boost!

    And yes, people are indeed seeing 4.7GHz *maximums*--but not all of the time, of course, since "max" doesn't mean "all of the time every time," does it? In their zeal to defend an otherwise indefensible Intel, people have completely butchered the "max" single-core boost concept (well, Intel people have butchered it, I should say..;)). Gee--if the only boost these CPUs ever did was 4.7GHz, then none of them would be "max," would they--they'd be the *normal clock* and the damn CPU would be running at 4.7GHz continuously on all cores!...;) I mean, is it possible for people to wax any more stupid on this subject than this? "Max" absolutely does not mean "all the time every time"--else "max" would have no meaning at all. Jeez--the stupid is strong @ Intel these days...;)

    Also, it's interesting to note that with fewer cores and a slower clock AMD processes data *faster* than Intel even though the Intel CPU has more cores and a higher clock--so please, the confusion over "max single core boost clocks" from AMD is just plain dumb, imo. It's plain enough--always has been. Multicore CPUs do not exist merely to see to what GHz a *single core* might reach @ maximum! Jeez--we graduated from single-thread thinking long years ago...;) (Or, rather, some of us did.)
  • bigboxes - Friday, November 15, 2019 - link

    derp
  • Alexvrb - Friday, November 15, 2019 - link

    As the review points out, it's also hard to measure burst frequency. The harder you try, the more you skew the result, too. The CPU could very well be hitting 4.7 briefly in variable workloads on the hot core... although maybe other samples hit it more often or for slightly longer periods of time.
  • III-V - Thursday, November 14, 2019 - link

    For real. It's the performance that matters, not some number with zero real world meaning.
  • Marlin1975 - Thursday, November 14, 2019 - link

    Wow, that's a lot of CPU for not much money when you compare it against the competition and how much others cost.

    I am surprised the dual channel memory does not hold it back more.
  • Foeketijn - Thursday, November 14, 2019 - link

    That's exactly what I was thinking. An incredible feat to score about double compared with a 3700X, with twice the cores and twice the power envelope but the same memory bandwidth. What are those embedded Epyc chips (3000 series) doing with quad channel DDR4?
  • brantron - Thursday, November 14, 2019 - link

    Zen 1 and Broadwell have higher memory bandwidth than Skylake-X at low thread counts.

    Broadwell D is still updated almost annually. High memory bandwidth at low power is apparently somebody's thing.

    https://www.anandtech.com/show/11544/intel-skylake...
  • Silma - Thursday, November 14, 2019 - link

    Based on your geomean chart, it looks like on 7nm, Intel chips would destroy AMD's, and there's a real possibility Intel's 10 nm chips will be competitive in price & superior in performance if Intel prices them to compete.
  • Silma - Thursday, November 14, 2019 - link

    Anyway, congrats to AMD and thanks for heating up the competition again.
  • naxeem - Thursday, November 14, 2019 - link

    Intel can't really do much. They have nothing in the pipeline.
  • Teckk - Thursday, November 14, 2019 - link

    Destroy is probably too strong? AMD will be on TSMCs 5 nm plus their new designs so they'll mostly be on par or in the same situation as today.
  • FreckledTrout - Thursday, November 14, 2019 - link

    I can't argue that Intel 7nm chips wouldn't destroy AMD's current chips. However, you are talking about two generations of process that need to come out for Intel, so at best end of 2021 but more likely 2022. AMD will either be on 5nm, or just about ready to release chips on it, by the time Intel has chips on 7nm, so I expect no destroying from either side, but instead healthy competition.
  • Oliseo - Thursday, November 14, 2019 - link

    Had to sit down after reading that. A sensible comment on the Internet.
    Faith in humanity restored.
  • abufrejoval - Friday, November 15, 2019 - link

    Was sitting already, but you made me smile :-)
  • brantron - Thursday, November 14, 2019 - link

    Intel's priority #1 for 7nm also may not be a new CPU architecture. GPU comes first. Willow Cove derivatives could very well appear on 14nm, 10nm, and then wait until 7nm+.

    And there could also be a 14nm Double Plus Good process. :p
  • Teckk - Thursday, November 14, 2019 - link

    Not sure if you intended to reply to me 🤔 I'm already of the opinion there's no destroying anyone anytime soon
  • Irata - Thursday, November 14, 2019 - link

    You could argue that prior to Ryzen 3000, Intel was on the better process (14nm ++... vs. GloFo 14nm and 12nm). And they did not exactly destroy Ryzen / Threadripper back then.

    Not saying this would not help them if they were on 7nm right now, but considering the impressive manner in which Intel's engineers have tweaked 14nm, the difference in performance may be smaller than expected.
  • nico_mach - Thursday, November 14, 2019 - link

    … Except everyone insists that Intel's 10nm IS equivalent to everyone else's 7nm. But sure, next process they'll destroy the competition, any year now.
  • GraveNoX - Thursday, November 14, 2019 - link

    Equivalent in what? It's like saying all diesel cars will have the same performance.
  • Oliseo - Thursday, November 14, 2019 - link

    "Everyone in my imagination insists Intel will destroy AMD. And the voices in my head don't lie"
  • lobz - Thursday, November 14, 2019 - link

    I'm sure they all mean: any decade now :)
  • Spunjji - Thursday, November 14, 2019 - link

    In theory it might be. In practice, they're still only able to make mobile CPUs with 4 cores or less on it.
  • Orange_Swan - Thursday, November 14, 2019 - link

    Nah, they've got at least one 6-core/12-thread, 15/25W mobile CPU, the Core i7-10710U
  • Retycint - Thursday, November 14, 2019 - link

    That's 14nm. All Intel's 10nm processors so far have been limited to 4 core Ice Lake U processors
  • Smartcom5 - Friday, November 15, 2019 - link

    If, and that's the whole issue here since a while now, IF Intel's 10nm would be working after all. Sure enough, that's a requirement which still needs to be fulfilled yet. The best process – no matter how oh-so-advanced it is going to be (on paper) – is worth exactly nothing if it ain't working. Then even a (on paper) inferior node is superior, since it at least meets a crucial condition: it's working (already).

    Thus, it isn't the same process anymore. They relaxed it quite a bit in 2017 to make it work, that's it.

    Intel's actual 10nm which spawned Ice Lake isn't the same as it was before; that's why it's coined 10nm+. It's actually less dense than Intel's initial and original 10nm which brought Cannonlake – density was toned down, it's more like ~12nm now.

    Interestingly, TSMC on its current 7nm N7 process already achieves virtually the very same density Intel failed to achieve on their initial 10nm process back then – while Intel's current 10nm+, which has a toned-down density from initially 2.7× down to only some 2.0–2.2× (depending on what sources you're willing to trust), is rumoured to rather equal some 12nm-ish process instead of being close to any initial Intel'ian 10nm.

    So while Intel somehow failed, others achieved the same density goals Intel was trying to reach for years, to no greater avail – and those others were even on track as scheduled most of the time. Thus, TSMC already fabs on a process which would equal Intel's very initial 10nm process, which never really saw any greater light of day, bar that known i3-8121U (well, and that m3-8114Y of course, ... you don't know a thing about it, okay?).
  • GraveNoX - Thursday, November 14, 2019 - link

    Yes, they will launch 10nm and 7nm on the same day so you have the freedom to choose which version of the chip you want.
  • Oliseo - Thursday, November 14, 2019 - link

    "Based on my imagination Intel will destroy AMD"
  • Santoval - Thursday, November 14, 2019 - link

    It is meaningless to compare Intel's 7nm parts, which will be released in 2021 assuming NO delays (thus more realistically in 2022+) with AMD's current 7nm parts. If you were going for a "node for node" comparison that is even more meaningless, because Intel's 7nm node will be equivalent to TSMC's 4nm or 5nm node in transistor density (I have read numbers predicting ~185 million transistors per mm^2 for TSMC's 5nm node and ~200 MTr/mm^2 for Intel's 7nm node). TSMC's 5nm node will almost certainly be released before Intel's 7nm node by the way.

    Regarding Intel's 10nm node parts, while Sunny Cove appears to have a higher IPC than Zen 2 Intel's 10nm parts suffer from much lower clocks which have eaten away all or almost all the IPC gains. This is why Intel have not announced an Ice Lake-S/H release and intend to replace it with Comet Lake-S/H. S/H parts require high clocks, which cannot be provided by Intel's 10nm+ node due to very low yields at high clocks. Only low power Ice Lake-U/Y parts and Ice Lake Xeons will be released. Why? Because these parts have lower clocks.

    More or less the same thing might be repeated with Tiger Lake, in 2H 2020, which would mean that Intel are not very confident of fixing their 10nm node issues even with their 10nm++ node variant. It is rumored that there will be no Tiger Lake-S/H parts and Rocket Lake-S/H will take their place. What's Rocket Lake? A 14nm+++++++ part but with a new μarch (Sunny or Willow Cove cores and a Gen11 or Gen12 iGPU).
  • Santoval - Thursday, November 14, 2019 - link

    edit : "and ~200 MTr/mm^2 for Intel's 7nm node".
  • Targon - Thursday, November 14, 2019 - link

    Didn't TSMC start 5nm risk production a month or so ago?
  • John_M - Monday, November 25, 2019 - link

    Yes, they did.
  • drunkenmaster - Thursday, November 14, 2019 - link

    That's not really how it works. 10nm for Intel is struggling for clock speed and yields, but more importantly the architecture remains fairly close to the same overall. What architecture gains they made they appear to be losing in clock speed.

    The only really big advantage AMD has from 7nm TSMC, aside from the lower clocks, is die size. They can fit 16 cores into a die size that Intel can't easily match. Frankly the only direct benefit Intel will get from 10nm is that a similar 16 core chip would hopefully use the same power as AMD's, though with the way their 10nm is working, due to poor yields and lacking clock speeds, they would likely up the voltage to push clock speeds as high as possible, and well, there is a reason their 10/12 core stuff isn't coming out on 10nm any time soon.

    AMD gained performance on architecture, but they were already close to this competitive with their 8 core 14nm chip (a worse 14nm node than Intel's). As with what Intel would gain, AMD gained in performance/watt and die size; the actual competitiveness of their chips is purely due to being a competitive architecture. Intel won't magically destroy them, they'll be in the same ballpark.

    This is the thing people have been kind of weird about: a year ago Zen 1 wasn't far behind Intel on a significantly worse '14nm' node that was more like, let's say, an 18nm node, and Intel had the best 14nm node by a mile. While AMD have a headstart on better nodes, when Intel gets to 10nm it's only in the same ballpark as TSMC 7nm. Even when Intel finally makes the leap they are only gaining parity with TSMC, not an advantage, and their architectural advantage is not really there. With Icelake the only 'win' is AVX512, which is basically unused for the home market.

    10nm won't enable Intel to destroy anyone. If their 10nm started working well it would enable them to drop to similar power usage as AMD and offer similar core counts in the same die sizes; the architectures are close enough that the chips would be pretty damn close in overall performance.

    Though that's assuming Intel manages a 16 core with a ringbus, which is one of the major reasons for its competitiveness in areas like gaming. If they have to go to a mesh for 16 cores then they lose that advantage.
  • Spunjji - Thursday, November 14, 2019 - link

    Solid assessment. Thanks for the post :)
  • lobz - Thursday, November 14, 2019 - link

    My dear friend, intel's 10nm chips will not be superior in performance, because they can't hit either the same core count or the frequency needed to let the high IPC matter. Also, by the time intel's 7nm comes out, AMD will have a better process than the current one and an even faster architecture. We're speaking at least 2 years from now. Destroy AMD's chips? Good luck.
  • drothgery - Thursday, November 14, 2019 - link

    Except at the highest core counts (on the desktop, HEDT, and server), Intel has much more of a pricing problem with Ryzen than a performance problem.
  • Phynaz - Thursday, November 14, 2019 - link

    I would say AMD has the pricing problem, as Intel sets their own price, where AMD has to react to Intel.
  • Death666Angel - Friday, November 15, 2019 - link

    Intel: "We just decided to drop the 10xxx HEDT pricing by 50% or so. No reaction to any of our competitors. Totally our own decision. Move along, nothing to see here." I get that.
  • Qasar - Saturday, November 16, 2019 - link

    Intel sets its own price? Maybe before, but not now. As Death666Angel said, Intel didn't drop the price of its 10xxx series CPUs because they were being nice; it was because they can't charge that much any more.
  • John_M - Monday, November 25, 2019 - link

    That's such a dumb argument. Of course they'll be competitive on price if they are priced to compete! You can say that about anything. The sentence reduces to nothing.
  • AIV - Thursday, November 14, 2019 - link

    Market pricing for Intel HEDT processors has already halved; e.g. the i9-9980XE is now available for ~1050EUR. See https://geizhals.eu/intel-core-i9-9980xe-extreme-e... .
  • Jorgp2 - Thursday, November 14, 2019 - link

    That's retailers lowering prices to clear stock, has nothing to do with Intel
  • Slash3 - Thursday, November 14, 2019 - link

    Intel is subsidizing retailer discounts in an effort to smooth over the transition to the newer Cascade Lake-X chips. It's part of their $3B marketing and incentive campaign. Subsequent fulfillment from Intel is at the reduced (but unofficial) lower pricing.

    https://www.computerbase.de/2019-10/high-end-cpu-i...

    Intel gets to clear remaining stock and retailers avoid taking a bath on previously purchased CPUs. Makes sense.
  • Spunjji - Thursday, November 14, 2019 - link

    If retailers had to drop prices that much to clear stock and Intel weren't chipping in to compensate, we'd definitely be hearing about it.
  • Phynaz - Thursday, November 14, 2019 - link

    Intel will be price protecting them.
  • Ratman6161 - Thursday, November 14, 2019 - link

    Personal opinion from someone who is NOT in the HEDT market: people whose work/livelihood comes from tasks that are noticeably better on an HEDT platform probably don't care about the price, or at least don't care as much as the rest of us do. All the math works out differently if you can actually take advantage of the things other than the CPU that HEDT offers. So I think there is still a pretty firm dividing line between x299 and Threadripper on one side and everything else in this test on the other.

    Just saying that price only becomes a deciding factor AFTER you decide which side of that line you are on.
  • AIV - Thursday, November 14, 2019 - link

    3950X blurs the line between HEDT and Desktop market. New 16 core EPYC rome (e.g 7302P is less than 1000EUR) along threadripper makes the market segmentation even more blurry. Especially at ~16 cores there are many alternatives in multiple product families.
  • phoenix_rizzen - Thursday, November 14, 2019 - link

    Yeah, you can now start mixing and matching CPUs based on your other needs (PCIe lanes, I/O support, memory channels, memory speeds, graphics, etc).

    Ryzen CPUs give you dual-channel memory and 24 (16+4+4) PCIe lanes.
    Threadripper CPUs give you quad-channel memory and 88 (64+16+8) PCIe lanes.
    EPYC CPUs give you octo-channel memory and 128 PCIe lanes.

    Figure out how much memory and I/O you need, then choose the CPU with the number of cores you want.

    From 2-core Athlons with integrated graphics to 64-core monsters, there's plenty of choices along the way. :)

    8-core Ryzen 7
    8-core EPYC

    12-core Ryzen 9
    12-core EPYC

    16-core Ryzen 9
    16-core Threadripper
    16-core EPYC

    24-core Threadripper
    24-core EPYC

    32-core Threadripper
    32-core EPYC
  • lobz - Thursday, November 14, 2019 - link

    It's still a bad deal for that price. That CPU is on a dead platform.
  • Phynaz - Thursday, November 14, 2019 - link

    Kinda like Threadripper, eh?
  • yeeeeman - Thursday, November 14, 2019 - link

    This CPU basically renders Intel CPUs under $1000 useless. Only the 9900KS brings something extra in gaming and general app usage. Threadripper will do the same for the higher-end $1000+ market. This is the first time in many, many years that AMD is better in pretty much all price categories, period.
  • Total Meltdowner - Thursday, November 14, 2019 - link

    Glad I bought AMD Stock 3 years ago. Wish I had invested more... sigh.
  • Holliday75 - Thursday, November 14, 2019 - link

    I'm right there with you. I've made a good chunk of change on AMD stock the last 5-6 years, but not as much as I should have. Played it way too conservative. Hindsight sucks.
  • imaheadcase - Thursday, November 14, 2019 - link

    Buying stock based on a CPU is the worst thing you could have done. I hear that a lot; it's almost as if people think CPUs are the only business Intel and AMD have.
  • Spunjji - Thursday, November 14, 2019 - link

    It's a much larger part of AMD's business than Intel's, and it marks the difference between them being profitable and not profitable - so while you're theoretically sort-of right, in practice, not so much.
  • itproflorida - Thursday, November 14, 2019 - link

    Not here to defend Intel, but a 9700K OC'd to 5GHz will give a 9900KS a run for its money in gaming; it still outscores the 9900K and KS in many games, and likewise the 3900X and 3950X.
  • Notmyusualid - Friday, November 15, 2019 - link

    Turning off Hyper-Threading, and soaking up the extra cache nicely takes care of that...
  • UglyFrank - Thursday, November 14, 2019 - link

    It makes me wonder where Intel could be if they weren't on their 5th year of 14nm or their 4th year of 'skylake'.
    Well done to AMD, maybe I'll buy a big Navi GPU next year if they can get that right too, although I have more faith in Nvidia than I have in Intel.
  • Targon - Thursday, November 14, 2019 - link

    Intel made the mistake of linking the architecture to the fab process. If Intel didn't do that, then would we have seen actual architecture improvements from Intel on the 14nm process? AMD has been in the position where Zen to Zen+ to Zen2 could theoretically have been done on any process node, though power draw, clock speeds, and size would have been different and potentially an issue.
  • Kishoreshack - Thursday, November 14, 2019 - link

    Curious:
    why didn't it get a gold award from Anandtech?
    Ian Cutress, any explanation?
  • lukx - Thursday, November 14, 2019 - link

    New Threadripper will get gold :)
  • Ian Cutress - Thursday, November 14, 2019 - link

    I only give out recommended or nothing. I'm not a fan of platinum/gold/silver awards. Recommended is the highest award from me.
  • rret - Thursday, November 14, 2019 - link

    we have nearly 400 million dollars - you give us gold award and then we release the money to you. my lawyer will call you...either that or my attorney general. stay close to phone!
  • Kishoreshack - Friday, November 15, 2019 - link

    That's a good thing.
    I hope gold awards are scrapped altogether.
    Only "recommended" is what matters,
    so we know all Anandtech recommendations carry the same weight.
    Hope to see you implement this.
  • Oliseo - Thursday, November 14, 2019 - link

    No brown envelope at the usual dead drop.
  • Irata - Thursday, November 14, 2019 - link

    Just curious: Were the Intel systems fully patched, including the new JCC Microcode update ?
  • ydeer - Thursday, November 14, 2019 - link

    Excellent question.
  • Ian Cutress - Thursday, November 14, 2019 - link

    Did I run 30hr+ of tests on each of those Intel processors with a patch from an announcement 24 hours prior? No, of course not.
  • Irata - Thursday, November 14, 2019 - link

    To be honest, I did not expect that. It must not be fun testing with the different security-related updates from Intel or BIOS patches from AMD that come out regularly.

    Might be worth adding a mention to the review, since the JCC and Zombieload 2 updates can have a performance impact (as per Phoronix).
  • Irata - Thursday, November 14, 2019 - link

    Oh, and not to be rude - thank you very much for this thorough review, Ian.
  • Flying Aardvark - Friday, November 15, 2019 - link

    Can you note that in the review in large bold print, and that it's likely to affect performance, possibly dramatically, for the Intel CPUs?
  • lukx - Thursday, November 14, 2019 - link

    What were the temperatures? Can you use air cooling?
  • Ian Cutress - Thursday, November 14, 2019 - link

    Different software was saying different values depending on which sensor. I'm going to go back at some point and see if I can figure out why some were +30C over others.
  • eastcoast_pete - Thursday, November 14, 2019 - link

    Thanks Ian! I agree that, right now, the 3950X is King of the Hill in the HEDT space. Two minor flies in the otherwise good Zen 2 ointment:
    1. To fit the 3950X into the target thermal envelope, AMD reserves the best 7 nm dies for the 3950X, at least for now (pending Threadripper). While that makes sense business wise, it means a much lower chance for the rest of us to score a great die in the binning lottery. The fun with earlier Zen chips was that one could, with some luck, get a great die in a mainstream chip. Apparently, no more. The mainstream Ryzens are still very good, but the idea of getting a little something extra added appeal.

    2. AMD has advertised the 3950X as targeted for liquid cooling. I know a lot of those who buy it will do so anyway, but there are some "air heads" left, including this one. I look forward to a comparison of liquid vs high-end and mainstream air coolers.

    Overall, a great chip, that will keep Intel running to catch up, and that's good for all of us!
  • hansmuff - Thursday, November 14, 2019 - link

    Isn't (1) pure speculation? As far as (2) goes, even with the 3900X AMD was saying that the top frequencies need enhanced cooling, and that's not particularly true. Even on a good AIO, we're not seeing the 3900X performing better than on good air. The 3950X has similar thermal envelopes, so I think a good air cooler will be just dandy.
  • eastcoast_pete - Thursday, November 14, 2019 - link

    The binning part (1) was mentioned in an article in golem.de . That site is usually pretty reliable, despite their name.
  • abufrejoval - Friday, November 15, 2019 - link

    While it's one of my favorites, too, I doubt that in this specific case they have any insights AT lacks. I believe they editorialized that part.

    But of course sophisticated binning is a core part of the CPU business these days.

    And another good one with exclusive news gems every now and then is this one: https://elchapuzasinformatico.com/
  • Irata - Thursday, November 14, 2019 - link

    Does AMD consider the 3950X to be HEDT ? Thought it was part of their mainstream platform.
  • Ratman6161 - Thursday, November 14, 2019 - link

    No. the 3950x is not HEDT. Its performance blurs the lines a bit but HEDT is more than just the CPU. Its also about PCIe lanes and RAM capacity etc.
  • Ratman6161 - Thursday, November 14, 2019 - link

    "the 3950X is King of the Hill in the HEDT space"

    The 3950x is not in the HEDT space. That's one of the most important conclusions I take away from this review. 3950x runs on a regular desktop motherboard (albeit an x570 to get full advantage of it). So besides just the price of the CPU itself, it makes everything about the system less expensive as well. My conclusion is that the 3950X is the best overall performance you can get from something that is not HEDT. This makes it much more accessible to us mere mortals...though personally I'm still looking at the 3700X.
  • eastcoast_pete - Thursday, November 14, 2019 - link

    I guess one person's high-end desktop is another one's mainstream. I call a CPU that, yes, costs more than twice as much as a standard 8-core, yet is also faster than many other "HEDT" CPUs, HEDT. In my view, HEDT is still below workstation level; workstations have all the goodies you mention, such as many more PCIe lanes, quad or more memory channels, support for error-correcting memory and lots of it, and are often multi-socketed. But then, the prices for these are eye-watering. Regardless, this is a fast CPU.
  • phoenix_rizzen - Thursday, November 14, 2019 - link

    Time for another category, then. :)

    This is definitely a high-end desktop CPU. Runs in a mainstream desktop motherboard, and performs better than pretty much every other mainstream desktop processor.

    Maybe it's time to call the Intel -X and AMD Threadripper lines "workstation-class" or "high-end workstation" or something along those lines.

    So you get desktop, workstation, and server. With low-end/high-end sub-groups for them.

    So Athlon 3000G and Ryzen 3 would be low-end desktop. Ryzen 5 and 7, and the APUs, would be normal desktop. Ryzen 9 would be high-end desktop.
  • eastcoast_pete - Thursday, November 14, 2019 - link

    Also, Ian and other reviewers, please start to (also?) test x265 and AV1 encoding of 4K video files. Working with those could and would be a really good reason to take a long, hard look at the 3950X and similar HEDT CPUs.
  • Jorgp2 - Thursday, November 14, 2019 - link

    That's one of the use cases for AVX512
  • eastcoast_pete - Thursday, November 14, 2019 - link

    That's a key reason for me to ask for those. Really curious how well the 3950X does here against Chipzilla's offerings. If the AMD chip holds its own there, it'd also be the value king for video editing on a budget.
  • Da W - Thursday, November 14, 2019 - link

    Seems like the 3900X is enough for me.
  • Total Meltdowner - Thursday, November 14, 2019 - link

    For $250 less I agree with you. If you want the best of the best in an AM4 Socket, the 3950x is it, though.

    Does the 3950x work in an x370 MB?
  • Ratman6161 - Thursday, November 14, 2019 - link

    It probably depends on what x370 motherboard you have. I've got an Asus Prime x470 Pro, and checking its CPU compatibility list, it supports the 3950X. While I was there I looked at the older Prime x370 Pro, and Asus claims it supports the 3950X too. So check with the manufacturer, and also take a look at all the other components of your system, like the power supply, to make sure they are also up to the task. Also note that you wouldn't get PCIe 4.0 on anything less than an x570.

    On the other hand, if you are springing for a $749 CPU, why not just get a new motherboard too. There are some decent x570 boards out there under $150.
  • eva02langley - Thursday, November 14, 2019 - link

    Because they are trash. The x470/x370 lineup has better entry-level boards. I will wait for someone posting their experience to make the call. So far, if I buy a 3900X, I will stick with my current board; others tested it and found no real performance limitations.
  • willis936 - Thursday, November 14, 2019 - link

    Obligatory reminder that the current gaming test suite is not useful. These games are GPU limited and no one should spend more than $200 on a CPU for most games. Counter strike would be a proper test for people who want 1000 fps on older games.
  • Ian Cutress - Thursday, November 14, 2019 - link

    So, World of Tanks at 720p doesn't fit that space? 660 FPS !
  • alpha754293 - Thursday, November 14, 2019 - link

    "All of our benchmark results can also be found in our benchmark engine, Bench."

    At the time of this writing, the new AMD Ryzen 3950X cannot be found in the Bench database.
  • Ryan Smith - Thursday, November 14, 2019 - link

    Fixed. Thanks!
  • Manabu - Thursday, November 14, 2019 - link

    Why is the Ryzen 3700X missing from so many charts through the article, including the final 2019 performance vs price chart? And it's already about time to have tested the 3600 too.
  • Ian Cutress - Thursday, November 14, 2019 - link

    Gavin tested the chips for the 3900X/3700X review. I've done this review with updated OS. Plus I've been moving house, and everything is still in boxes.
  • Adonisds - Thursday, November 14, 2019 - link

    Is the 4.7 GHz boost fake news?
  • Total Meltdowner - Thursday, November 14, 2019 - link

    AMD is like Trump here, telling the truth.
  • III-V - Thursday, November 14, 2019 - link

    Unlike Trump though, the truth doesn't even matter here. Performance is all that matters, not a completely irrelevant number
  • Ian Cutress - Thursday, November 14, 2019 - link

    There's a page that covers it.
  • Irata - Thursday, November 14, 2019 - link

    Just checked two other reviews (so far):
    - Hexus reached 4,665.54 MHz
    - bit-tech stated "With a 4.7GHz boost, which we actually saw regularly, it wasn't surprising to see the Ryzen 9 3950X top the Cinebench single-threaded test"
  • Total Meltdowner - Thursday, November 14, 2019 - link

    Baller CPU
  • neogodless - Thursday, November 14, 2019 - link

    Is the Intel 9980XE pricing correct in the 2019 CPU Performance chart? It appears on Intel's web site to be $1979-1999, and the cheapest I found it online was on sale at Amazon for $1949.
  • Ian Cutress - Thursday, November 14, 2019 - link

    I moved it to $979 because that's the price of the upcoming 10980XE, which hasn't been released but has some extra frequency, so it should score 'at least' there.
  • platinumjsi - Thursday, November 14, 2019 - link

    The Geekbench multicore results look very low for the 9980XE, Hot Hardware and OC3D's reviews of that chip put it at around 43k and the Geekbench browser puts non overclockable workstations at around 55k.

    Was multicore enhancement off for Intel and PBO on for AMD?
  • blppt - Thursday, November 14, 2019 - link

    If I had to guess, it looks like maybe they have turbo completely disabled on both the 9980XE and the 7980XE, meaning in the case of the 7980XE, it will never clock higher than 2.6ghz. Or maybe they included scores for the 32-bit test for those two by mistake?

    See my post below---I regularly get 52-53K in that benchmark, no overclocking and no high clock ram.
  • blppt - Thursday, November 14, 2019 - link

    Something is really wrong with your 7980XE setup---getting 30K in Geekbench 4???

    Granted I have the multi-core enhancement enabled in the BIOS, but I get 52-53K consistently, no overclocking. Using standard 2600 DDR4.

    https://browser.geekbench.com/v4/cpu/14797740
  • Count Rushmore - Thursday, November 14, 2019 - link

    Hmm... seems like for a rendering machine, Threadripper is the way to go. I thought I could build 'cheap' rendering machines with the 3950X... but those 2 memory channels seem inadequate. Looking fwd to the 25th!
  • Oliseo - Thursday, November 14, 2019 - link

    I would say the dual memory channel makes it a "prosumer" choice rather than a professional.

    Amazing value though for someone just starting out their career. That level of performance at home without breaking the bank.

    Not bad at all.
  • Count Rushmore - Thursday, November 14, 2019 - link

    No doubt about the value... Would love to see more people getting into 3D rendering
  • icoreaudience - Thursday, November 14, 2019 - link

    When is Anandtech going to use a modern compressor like Zstandard for the encoding test?
    It's a great fit for multi-threading tests!
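
    For reference, libzstd's C API has multi-threading built in, so a test harness can be tiny. A minimal sketch (level 19 and 16 workers are arbitrary picks of mine, and it assumes libzstd was built with ZSTD_MULTITHREAD):

        #include <zstd.h>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<char> src(64 << 20, 'x');                    // 64 MiB of dummy input
            std::vector<char> dst(ZSTD_compressBound(src.size()));

            ZSTD_CCtx* cctx = ZSTD_createCCtx();
            ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
            ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 16);      // scale across cores

            size_t n = ZSTD_compress2(cctx, dst.data(), dst.size(),
                                      src.data(), src.size());
            if (ZSTD_isError(n)) { std::puts(ZSTD_getErrorName(n)); return 1; }
            std::printf("compressed %zu -> %zu bytes\n", src.size(), n);
            ZSTD_freeCCtx(cctx);
        }

    Time that across different worker counts and it would make a decent core-scaling test.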
  • itproflorida - Thursday, November 14, 2019 - link

    Great, so the 9700K is still the price/performance gaming king.
  • eek2121 - Thursday, November 14, 2019 - link

    Ian, upgrade the 1080. Your gaming benchmarks are very clearly GPU bound at this point.
  • eva02langley - Thursday, November 14, 2019 - link

    Yeah, because the average Joe is owning a 2080 TI to play at 1080p...
  • blppt - Thursday, November 14, 2019 - link

    Believe it or not, you need a 2080Ti to play 1080p at max settings smoothly in RDR2 at the moment.

    My oc'd 1080ti (FTW3) chokes on that game at 1080p/max settings.
  • itproflorida - Thursday, November 14, 2019 - link

    Not so: 78 fps avg in the RDR2 benchmark and 72 fps avg in-game at maxed settings @ 1440p, on a 2080ti and a 9700K @ 5GHz.
  • blppt - Thursday, November 14, 2019 - link

    The 2080ti and other 2xxx series cards do MUCH better in RDR2 than their equivalent 10-series cards. Look at the benchmarks---we have Vega 64s challenging 1080tis in this game. That should not happen.

    https://www.guru3d.com/articles_pages/red_dead_red...
  • Ian Cutress - Thursday, November 14, 2019 - link

    I have 2080 Ti units standing by, but my current benchmark run is with 1080s until I do a full benchmark reset. Probably Q1 next year, when I'm back at home for longer than 5 days. Supercomputing, Tech Summit, IEDM, and CES are in my next few weeks.
  • Dusk_Star - Thursday, November 14, 2019 - link

    > In our Ryzen 7 3700X review, with the 12-core processor

    Pretty sure the 3700X is 8 cores.
  • Lux88 - Thursday, November 14, 2019 - link

    Not a single compilation benchmark...
  • Ian Cutress - Thursday, November 14, 2019 - link

    Having issues getting the benchmark to work on Win 10 1909, didn't have time to debug. Hoping to fix it for the next benchmark suite update.
  • Lux88 - Thursday, November 14, 2019 - link

    Thanks!
  • stux - Thursday, November 14, 2019 - link

    Sad,

    Desperately want to know if the 3950X will make a good developer workstation with 64GB of RAM and a fast NVMe drive, or if it is going to be memory-bandwidth bottlenecked... and I'll need to step up to TR3.
  • tmanini - Thursday, November 14, 2019 - link

    Depends on your development needs: the article states it has dual-channel memory, not 4 or 6 channels.
  • Spunjji - Thursday, November 14, 2019 - link

    I have a question about the power numbers - do they look significantly different with only one thread loaded per core?
  • ksec - Thursday, November 14, 2019 - link

    If we look at the benchmarks running on open source programs, it is clear AMD tends to have a much higher chance of being on par with or beating Intel. I wonder how much optimisation, from compilers to libraries, gives an advantage to Intel and not to AMD.
  • Maxiking - Thursday, November 14, 2019 - link

    Pretty sad cpu, bottlenecking ancient 1080gtx at 1080p. Just lol
  • Qasar - Thursday, November 14, 2019 - link

    come on maxiking, the 9xxx cpu's are that bad.. after all they need the extra frequency just to keep what little performance advantage they, at times, barely still have.
  • stux - Thursday, November 14, 2019 - link

    Great review, but where are the compilation benchmarks?
  • Ian Cutress - Thursday, November 14, 2019 - link

    I was having issues getting the benchmark to work on Win 10 1909, and didn't have time to debug and retest. I'm hoping to fix it for the next benchmark suite update.
  • stux - Thursday, November 14, 2019 - link

    Thanks Ian, looking forward to the update.
  • kc77 - Thursday, November 14, 2019 - link

    I don't see the TDP comparisons with the Intel rig. Are they there? I see AMD TDP mentioned but not the Intel parts.
  • willis936 - Thursday, November 14, 2019 - link

    I moved to the midwest recently and I have to wonder: Who is christ and why does everyone care what CPU he has?
  • 529th - Thursday, November 14, 2019 - link

    There is also a Windows power plan called 'Ultimate'. What's better for AMD Ryzen CPUs: RHP, or the plain old Windows Ultimate power plan?
  • TallestJon96 - Thursday, November 14, 2019 - link

    Stop testing with a GTX 1080. It is almost 4 years old, and there are half a dozen cards that are significantly faster than it now. Anandtech's gaming benchmarks are behind the times and not very useful because of this.
  • Hadenas - Thursday, November 14, 2019 - link

    Please start using either a 2080 or 2080ti for gaming test. Nobody would use a 1080 with the 3950x.
  • Satyyri1972 - Wednesday, November 20, 2019 - link

    I am
  • CHADBOGA - Thursday, November 14, 2019 - link

    Always wonderful to see the worst gaming setup for benchmarking, anywhere on the internet.
  • SanX - Friday, November 15, 2019 - link

    Where is the source code for 3D particle movement? In the link above there are only exe files, and it is unclear what the test is doing. Maybe it is doing just one single operation in cache over and over again.
  • lenghui - Friday, November 15, 2019 - link

    I am guilty of not reading the entire article, and will definitely come back again to read it all. At a glance, I love how AnandTech includes the prices so that I don't have to keep looking them up. However, I am still unable to figure out what RHP vs HP means halfway through the charts. I wish there were a simple legend to explain that. I tried looking around and didn't find any references. Again, I am guilty of not reading it all; just saying a legend would be super convenient when I want to look at the results in a crunch.
  • lenghui - Friday, November 15, 2019 - link

    Thanks for the review, Ian. I finally got to the last page and saw what HP and RHP mean, but my earlier comment still stands.

    To me personally: I haven't run Ryzen Master since the first two days after I built my Ryzen machine, because Ryzen Master can only launch when Virtualization Based Security (VBS) is disabled. This is really lame on AMD's part. I built this 8-core/16-thread machine to run my VMs, but I can't run Ryzen Master :( Other than that, I love my build.
  • GreenReaper - Friday, November 15, 2019 - link

    This piece is a great endorsement of the 3900X, or the 3700X for desktop tasks. Why pay more?
    You profit from being willing to accept imperfect cores - and let's not forget a cooler is thrown in.

    Testing for the latest issues will be fun; not only do you have before/after microcode patches, you have before/after the program under testing was recompiled to avoid the situation requiring patching.
  • Sailor23M - Friday, November 15, 2019 - link

    3700X missing in the price/perf graph.
  • rocky12345 - Friday, November 15, 2019 - link

    Great review, thanks. I do have to nitpick though. You seem to be scolding AMD for the Zen 2 CPUs slightly going over their stated power limits. I found this a bit odd, because if you look at Intel CPUs and how they state their power limits, you will hardly ever see them coming even remotely close to the stated limit - their CPUs under full load pull much more power than Intel states.

    For example, the 9900K/9900KS have been seen pulling almost double the power stated by Intel under stock conditions. Yes, I understand AMD & Intel list their TDP power limits differently, and on the Intel side those numbers mean very little unless you lock the chips at their base clocks without ever letting them go into turbo boost mode.
  • kc77 - Friday, November 15, 2019 - link

    I noticed that as well. I'm noticing less and less TDP comparisons.
  • ajlueke - Friday, November 15, 2019 - link

    " the motherboard choice (as well as the cooling it uses) will matter."

    AMD has really been pushing the idea that VRMs and cooling will affect Ryzen 3000 series performance (likely to sell X570 motherboards), but I haven't come across any actual supporting data.

    In my experience, the Ryzen 3000 series winds up voltage bound by FIT, well below excessive temperature or PPT, TDC, EDC bounds. Such that any additional headroom, like from a custom loop or better VRMs accomplishes nothing.

    Now, the statement I quoted above is pretty definitive that it does matter. So I imagine, there is data supporting it. Care to share?
  • rickderick7 - Friday, November 15, 2019 - link

    I bought an AMD Ryzen 9 3950X last week and set it up in my PC, but since then my printer stopped working. Is that something related to this device? I even tried setting up the printer again, but still no luck. I need my printer working as before.

    Could someone advise on this?
    Thanks
    Rick
  • Drazick - Saturday, November 16, 2019 - link

    It has nothing to do with the CPU, be it Intel or AMD.
  • Supercell99 - Friday, November 15, 2019 - link

    All these comments and I haven't read one useful one yet.
  • Rudde - Friday, November 15, 2019 - link

    It appears that AMD's PPT power limit is 35% higher than their TDP (so a 105 W TDP part is allowed to draw up to roughly 142 W).
  • Drazick - Saturday, November 16, 2019 - link

    It is about time we move to Quad (4) Memory Channels in consumer computers.
  • csell - Saturday, November 16, 2019 - link

    Or DDR5 RAM. I expect DDR5 RAM to arrive first.
  • Drazick - Sunday, November 17, 2019 - link

    The DDR technology is orthogonal.
    I want quad channel and the latest memory available.
  • guyr - Friday, December 20, 2019 - link

    Anything is possible, of course. 5 years ago, who would have predicted 16 cores in a consumer-oriented CPU? However, neither Intel nor AMD has made any moves beyond 2 memory channels in the consumer space. The demand is simply not there to justify the increase in complexity and price. In the professional space, more channels are easily justified and the target market doesn't hesitate to pay the higher prices. So, it's all driven by what the market will bear.
  • alufan - Saturday, November 16, 2019 - link

    Weird. Intel launches its chip a couple of weeks ago and it stayed up front as the main story for over a week; AMD launches what is in effect the best CPU ever tested by this site, and it lasts a few days before being pushed aside for another Intel article. I am sure the intention of the reporters is to be fair and unbiased, however I can see how the commercial motives of the site are being manipulated. Looks like Intel's up to its old tricks again. The Threadripper article lasted even less time, but no chips have been tested (or at least released) yet, which I guess makes sense.
  • penev91 - Sunday, November 17, 2019 - link

    Just ignore everything Intel/AMD related on Anandtech. There's been an obvious bias for years.
  • Atom2 - Saturday, November 16, 2019 - link

    There has never been a situation as big as this one, where the bench software was benchmarked more than the hardware. A comprehensive overview of historic software development? Whatever the reason, it seems that keeping AVX512 back for only a select few CPUs was an unfortunate decision by Intel, which only contributed to the situation. Yes, you know, if you compile your code with a compiler from 1998 and ignore all the guidelines on how to write fast code... Voila... For some reason, however, nobody tries to run 20-year-old CPU code on a GPU.
  • chrkv - Monday, November 18, 2019 - link

    Second page "On the Ryzen High Performance power plan, our sustained single core frequency dropped to 4450 MHz" - I believe just "the High Performance" should be here.
    Page 4 "Despite 5.0 GHz all-core turbo being on the 9900K" - should be "9900KS".
  • Irata - Tuesday, November 19, 2019 - link

    Quick question: Are any of your benchmarks affected by the Matlab issue (Ryzen CPUs are crippled because a poor code path is used, due to a vendor ID check for "GenuineIntel")?
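
    For anyone wondering what that "vendor ID check" actually is: it's just the CPUID vendor string. A quick illustrative sketch of mine (GCC/Clang only):

        #include <cpuid.h>    // GCC/Clang helper for the CPUID instruction
        #include <cstdio>
        #include <cstring>

        int main() {
            unsigned eax, ebx, ecx, edx;
            __get_cpuid(0, &eax, &ebx, &ecx, &edx);   // leaf 0 returns the vendor string
            char vendor[13];
            std::memcpy(vendor + 0, &ebx, 4);         // string is packed as EBX, EDX, ECX
            std::memcpy(vendor + 4, &edx, 4);
            std::memcpy(vendor + 8, &ecx, 4);
            vendor[12] = '\0';
            std::printf("%s\n", vendor);              // "GenuineIntel" or "AuthenticAMD"
        }

    A library that dispatches on this string, rather than on the actual feature flags, will take the slow path on AMD no matter what the CPU supports.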
  • twotwotwo - Tuesday, November 19, 2019 - link

    Intel's had these consumer-platform-based "entry-level Xeons" (once E3, now E) for a while. Despite some obvious limits, and that there are other low-end server options, enough folks want 'em to seed an ecosystem of rackmount and blade servers from Supermicro, Dell, etc.

    Anyway, the "pro" (ECC/management enabled) variant of Ryzen seems like a great fit for that. 16 cores and 24 PCIe 4 lanes are probably more useful for little servers than for most desktop users. It's also more balanced than the 8/16C EPYCs; it's cool they have 128 lanes and tons of memory channels, but it takes very specific applications to use them all with that few cores (caching?). Ideally the lesser I/O and lower TDPs also help make denser/cheaper boxes, and the consumer-ish clocks pay off for some things.

    The biggest argument against is that the entry-level server market is probably shrinking anyway as users rent tiny slices of huge boxes from cloud providers instead. It also probably doesn't have the best margins. So maybe you could release a competitive product there and still not make all that much off it.
  • halfflat - Thursday, November 21, 2019 - link

    Very curious about the AVX512 vs AVX2 results for 3dPM. It's really unusual to see even a 2x performance increase going from AVX2 to AVX512 on the same architecture, given that running AVX512 instructions will lower the clock.

    The non-AVX versions, I'm presuming, are utilizing SSE2.

    The i9-9900K gets a factor of 2 increase going from SSE2 to AVX2, which is pretty much what one would expect with twice as many fp operations per instruction. But the i9-7960X performance with AVX512 is *ten times* improved over SSE2, when the vector is only four times as wide and the cores will be running at a lower clock speed.

    Is there some particular AVX512-only operation that is determining this huge performance gap? Some further analysis of these results would be very interesting.
  • AIV - Wednesday, November 27, 2019 - link

    Somebody posted that it's caused by 64-bit integer multiplies, which are supported in AVX512 but not in AVX2, and thus fall back to scalar operations.
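
    To illustrate the gap (a sketch of mine, not code from 3DPM, whose source isn't public): AVX-512DQ has a native packed 64-bit multiply, while AVX2 has no equivalent, so an AVX2 build must emulate it with 32-bit partial products or go element by element:

        #include <immintrin.h>
        #include <cstddef>
        #include <cstdint>

        // AVX-512DQ: one vpmullq squares eight 64-bit ints at a time.
        __m512i square8(__m512i v) {
            return _mm512_mullo_epi64(v, v);
        }

        // AVX2 has no packed 64-bit multiply, so the fallback is per element.
        void square_scalar(uint64_t* x, size_t n) {
            for (size_t i = 0; i < n; ++i) x[i] *= x[i];
        }

    (Compile the first function with -mavx512dq.)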
  • halfflat - Wednesday, November 27, 2019 - link

    For Brownian motion? That seems weird. Nonetheless, it can't alone explain the speed up.

    Most favourable scenario: code consists only of floating point mul and add pairs, together with 64-bit integer multiplication. The floating point operations could become 4x faster in AVX2 (twice as wide as SSE, and using FMAs); to see the observed 2x speed up, that means the floating point operations constituted 2/3 of the execution time in the SSE version.

    The AVX512 version, ignoring any consequent downclocking, could make those floating point operations 8x faster than the SSE case, and the 64-bit integer multiplies also 8x faster. That's still not 10x, it ignores the lower throughput of 8-wide i64 muls compared to scalar muls, and also discounts the slower clock speed.
  • halfflat - Thursday, November 28, 2019 - link

    Just an update: ran a simple test (square eight times all the 64-bit ints in a 1024-long array) wrapped in google benchmark on a Skylake Xeon with gcc-8.2 -O3. The kernel is almost entirely multiplications, and ultimately saw a roughly 2x speed up with AVX512 compared to AVX2, and a 2.5x speed up with AVX512 compared with a 'no architecture specified' compilation.
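
    For anyone who wants to poke at it, the kernel was essentially the following (a sketch of my test, using google benchmark):

        #include <benchmark/benchmark.h>
        #include <cstdint>
        #include <vector>

        // Square all 1024 64-bit ints, eight times over, per iteration.
        static void BM_SquareU64(benchmark::State& state) {
            std::vector<uint64_t> a(1024, 3);
            for (auto _ : state) {
                for (int r = 0; r < 8; ++r)
                    for (auto& x : a) x *= x;
                benchmark::DoNotOptimize(a.data());
                benchmark::ClobberMemory();
            }
        }
        BENCHMARK(BM_SquareU64);
        BENCHMARK_MAIN();

    Built with -O3 and, respectively, no -march flag, -march=haswell (AVX2), and -march=skylake-avx512.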
  • w1p30ut3r - Friday, November 22, 2019 - link

    It's very, very simple. If you only game, buy an Intel... If you work and game, buy a 3950X... If you only work, buy a Threadripper or a Xeon...
  • Parkab0y - Sunday, October 4, 2020 - link

    I really want to see something like this for Zen 3 / the 5000 series.
  • madymadme - Saturday, November 7, 2020 - link

    Going to buy:
    AMD Ryzen 9 5900X,
    Gigabyte B550 AORUS PRO AC,
    Noctua NH-D15 with dual 140mm fans,
    G.Skill Trident Z RGB Series 16GB (2x8GB) 4000MHz DDR4 memory, F4-4000C18D-16GTZRB

    Is a Corsair CV550 watt PSU OK with the above spec? I also have a Quadro K2000D graphics card. Is this specification OK, and which RAM should I get? Please help a little, and thanks for reading and replying.
