JudasGoat 14 hours ago

Do the companies that buy these chips like this pay full price? I've always heard that volume buyers get deals we consumers aren't privy to.

  • ksec 10 hours ago

    They don't. At least that was the case 5-6 years ago, and I doubt anything has changed since. And these aren't aimed at consumers anyway.

cebert 20 hours ago

> with support for up to 4.5TB of DDR4-2933 memory

It is astonishing how much RAM modern systems can have. You could store many databases entirely in memory.

  • bigfatkitten 20 hours ago

    With that much RAM, that many cores, and a pile of NVMe storage, it's truly phenomenal how much work one box can do (redundancy concerns aside).

will-burner 15 hours ago

Who buys chips like this and what are they used for?

It's obviously important to push the cutting edge, and it's imaginable that someday a chip like this could be in a personal computer (over 100 cores and over 4 TB of memory!).

The only things I can think of, which I don't have experience with so I don't know if it's reasonable, are high-performance instances at data centers, supercomputing, and high-performance mainframes (is that a thing?).

  • gomerspiles 14 hours ago

    Spending this much on a server's CPUs has been normal in data centers and server rooms for a very long time; the rise of x86 in servers was a cost-cutting shift much like the rise of ARM. It meant a lot of redesigning applications to run across many entirely separate machines, instead of on the smallest large server that could cover the full load, with roughly one application process per core, all sharing direct access to the same storage.

  • Sohcahtoa82 14 hours ago

    > Who buys chips like this and what are they used for?

    I imagine cloud service providers would buy it.

    On a price/performance chart, you'd think you'd be better off buying consumer-grade Core i9 CPUs, or even i7, but that ignores everything else that must be bought, like a motherboard and RAM, and doesn't even account for the added rack space.

    Put 256 GB of RAM next to that CPU and AWS can host 64 c7i.large (2 core, 4 GB) instances at $64/month each, or $4,096/month total. They'd see ROI on the CPU in under 5 months.
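    That back-of-envelope ROI math can be sketched out like this (the instance figures are from above; the ~$17,000 CPU list price is my assumption for illustration):

```python
# Rough ROI sketch: hosting c7i.large instances on one 128-core CPU.
# The $17,000 CPU list price is an assumed figure, not from the thread.
cpu_cores = 128
ram_gb = 256

inst_vcpus = 2          # c7i.large: 2 vCPU, 4 GB RAM
inst_ram_gb = 4
inst_monthly_usd = 64

# Whichever resource runs out first caps the instance count.
instances = min(cpu_cores // inst_vcpus, ram_gb // inst_ram_gb)
monthly_revenue = instances * inst_monthly_usd

cpu_list_price = 17_000  # assumed
months_to_roi = cpu_list_price / monthly_revenue

print(instances, monthly_revenue, round(months_to_roi, 1))  # 64 4096 4.2
```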

    But really, this CPU is great for any embarrassingly parallel task that doesn't run well on a GPU.

  • rincebrain 8 hours ago

    In any situation where power and cooling are effectively not your bottleneck, density becomes a limit on how much you can put in your server room. (Even if they eventually become your bottleneck, you might conceive of a situation where you build out to 75% capacity in 50% or less of the space to let you put in more efficient future versions in the other physical volume, or something.)

    I would imagine any of the big cloud providers, for the ones providing x86, would probably love to buy something like this to improve the density of what they can offer...

  • m463 11 hours ago

    I would use it for

      make -j 256
    
    I'll bet folks like Linus Torvalds would see reasonable time savings on their kernel compiles.

  • willcipriano 13 hours ago

    When you do colocation you often pay per rack unit. Fitting more cores into a server means you need fewer of them and can have a lower monthly bill.
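    A toy comparison of how density cuts a per-rack-unit bill (all prices and server specs here are hypothetical):

```python
# Toy colo cost comparison, assuming you pay per rack unit (RU)
# per month. All numbers below are hypothetical.
ru_price = 100            # $/RU/month (assumed)
cores_needed = 512

# Option A: 32-core servers in 1U each; Option B: 128-core servers in 2U each.
servers_a = cores_needed // 32    # 16 boxes -> 16 RU
servers_b = cores_needed // 128   # 4 boxes  -> 8 RU
cost_a = servers_a * 1 * ru_price
cost_b = servers_b * 2 * ru_price

print(cost_a, cost_b)  # 1600 800
```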

  • chaosbutters314 11 hours ago

    I did. Small-scale (CFD/FEA) simulation when the cloud or a company server doesn't make sense. With how much data the model produces, it's easier to work locally and verify it runs before spending time and money on the cloud. Also, a lot of simulations don't need to be that big, so this is great for working entirely locally and not paying for core-hours. Usually pays for itself in a year, frankly.

fragmede 14 hours ago

How much per hour do you suppose this'll cost on EC2?

  • Sohcahtoa82 13 hours ago

    Currently a c6i.32xlarge (128 core, 256 GB) is $5.44/hr, or up to $9.68/hr for an r6id.32xlarge with 1 TB of RAM and 4x1900 GB of NVMe.

    Of course, that's not even their biggest instance. For a whopping $109/hr, you can get a u-12tb1.112xlarge with 448 cores and 12 TB of RAM. I really have no idea what you would use that for. Machine learning with neural networks that would put OpenAI to shame?

    • ewzimm 12 hours ago

      $109/hr would be a cheap rate to hire a mechanic to work on a car, so the fact that anyone has access to that kind of compute for that price is pretty impressive!

    • bigfatkitten 9 hours ago

      It wouldn't take you long paying that rate to just buy an equivalent machine (even at MSRP).

      • fragmede 6 hours ago

        Reaching $20,000 at $100/hr takes just over eight days of non-stop use. I don't know what it looks like to want to use a machine like that for compute, but that makes the cloud look insanely expensive.
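        The break-even arithmetic, using the $109/hr rate quoted upthread and the rough $20,000 machine price as an assumption:

```python
# How long renting at $109/hr until you've spent the purchase
# price of an equivalent box? The $20,000 machine cost is a
# rough assumed MSRP from the discussion above.
hourly_rate = 109
machine_cost = 20_000

hours = machine_cost / hourly_rate
days = hours / 24
print(round(hours), round(days, 1))  # 183 7.6
```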