I've run a comparison benchmark for the smaller models: https://gist.github.com/mhitza/f5a8eeb298feb239de10f9f60f841...
I compared it against the RTX 4000 SFF Ada (20GB), which is around $1.2k (if you believe the original price on the Nvidia website: https://marketplace.nvidia.com/en-us/enterprise/laptops-work...) and which I have access to on a Hetzner GEX44.
I'd ballpark it at 2.5-3x faster than the desktop, except for the tg128 test, where the difference is "minimal" (but I didn't do the math).
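For anyone who wants to reproduce that kind of comparison, here's a minimal sketch of the llama-bench runs involved; the model path is just an illustration, and pp512/tg128 are llama-bench's default tests, reported in tokens per second:

    # Run the same GGUF model on each machine, then compare the t/s columns.
    # -ngl 99 offloads all layers to whichever GPU backend the build uses
    # (Vulkan, ROCm, or CUDA).
    ./llama-bench -m models/Llama-3.1-8B-Instruct-Q4_K_M.gguf -ngl 99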
Thanks for the excellent writeup. I'm pleasantly surprised that ROCm worked as well as it did; for the price, these aren't bad for LLM workloads and some moderate gaming. (Apple is probably still the king of affordable at-home inference, but for games... it has gotten amazing these days, yet Linux is still so much better.)
I switched to Fedora Sway as my daily driver nearly two years ago, after a Windows title wasn't working on my brand-new PC; with Steam+Proton+Fedora it worked immediately. Valve now offers a more stable and complete Windows API through Proton than Microsoft does through Windows itself.
For those who are already in the field and doing these things: if I wanted to start running my own local LLM, should I find an Nvidia 5080 GPU for my current desktop, or is it worth trying one of these Framework AMD desktops?
If you think the future is small models (27B), get Nvidia; if you think larger models (70-120B) are worth it, then you need AMD or Apple: a 70B model is roughly 40GB of weights even at 4-bit, which won't fit in a 5080's 16GB of VRAM but fits comfortably in the unified memory on one of these boxes or a Mac.
Jeff - check out the distributed-llama project... you should be able to distribute inference over the entire cluster.
He mentioned that in the video.
I was about to be annoyed until you said you got preprod units. I guess I'll have to build on this when my desktop shows up.
I had been hoping that these would be a bit faster than the 9950X because of the different memory architecture, but it appears that due to the lower power design point the AI Max+ 395 loses across the board, by large margins. So I guess these really are niche products for ML users only, and people with generic workloads that want more than the 9950X offers are shopping for a Threadripper.
Sounds about right.
I’m struggling to justify the cost of a Threadripper (let alone pro!) for a AAA game studio though.
I wonder who can justify these machines. High-frequency trading? Data science? Shouldn't that be done on servers?
Threadripper very rarely seems to make any sense. The only times it seems like you want it are for huge memory support/bandwidth and/or a huge number of PCIe slots. But it's not cheap or well-supported enough compared to EPYC to really make sense to me any time I've been speccing out a system along those lines.
I bought a Threadripper Pro system out of desperation, trying to get secondhand PCIe 80GB A100s to run locally. The huge ReBAR allocations confused/crashed every Intel/AMD system I had access to.
I think the Xeon systems should have worked and that it was actually a motherboard BIOS issue, but I had seen a photo of it running in a Threadripper and prayed I wasn't digging an even deeper hole.
Yeah I don't get it either. To get marginally more resources than the 9950X you have to make a significant leap in price to a $1500+ CPU on a $1000 motherboard.
It also seems like the tools aren't there to fully utilize them. Unless I misunderstood, he was running off the CPU only for all the tests, so there's still iGPU and NPU performance that hasn't been utilized in these tests.
No, only a couple initial tests with Ollama used CPU. I ran most tests on Vulkan / iGPU, and some on ROCm (read further down the thread).
I found it difficult to install ROCm on Fedora 42 but after upgrading to Rawhide it was easy, so I re-tested everything with ROCm vs Vulkan.
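For context on what re-testing "ROCm vs Vulkan" looks like with llama.cpp, here's a rough sketch of building and benchmarking both backends. The CMake flag names have changed across llama.cpp versions and the gfx1151 target for the AI Max+ 395 (Strix Halo) is my assumption, so treat this as illustrative rather than the exact commands used:

    # Vulkan backend (usually "just works" with the Mesa RADV driver)
    cmake -B build-vulkan -DGGML_VULKAN=ON
    cmake --build build-vulkan --config Release -j

    # ROCm/HIP backend (needs the ROCm stack installed; flag names vary by version)
    cmake -B build-rocm -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1151
    cmake --build build-rocm --config Release -j

    # Run the same benchmark from each build and compare pp512/tg128 tokens/sec
    ./build-vulkan/bin/llama-bench -m model.gguf -ngl 99
    ./build-rocm/bin/llama-bench -m model.gguf -ngl 99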
Ollama, for some silly reason, doesn't support Vulkan, even though I've used a fork many times to get full GPU acceleration with it on Pi, Ampere, and even this AMD system... (moral of the story: just stick with llama.cpp).
Sadly, the reason they give is subjectively terrible:
https://x.com/ollama/status/1952783981000446029
No experimental flag, no "you can use the fork that works fine, but we don't have the capacity to support it", just a hard "no, we think it's unreliable". I guess they just want you to drop them and use llama.cpp.
Yeah, my conspiracy theory is Nvidia is somehow influencing the decision. If you can do Vulkan with Ollama, it opens up people to using Intel/AMD/other iGPUs and you might not be incentivized to buy an Nvidia GPU.
ROCm support is not wonderful. It's certainly worse for an end user to deal with than Vulkan, which usually 'just works'.
Hi Jeff, I'm a Linux ambassador for Framework and I have one of these units. It'd be interesting if you would install RamaLama on Fedora and test that. I've been using it as a drop-in replacement for Ollama, and everything was GPU accelerated out of the box. It pulls ROCm from a container and just figures it out, etc. Would love to see actual numbers though.
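For anyone curious, here's roughly what that RamaLama workflow looks like; the package name and the model shortname are assumptions for illustration, and the point is that it picks a ROCm-, Vulkan-, or CUDA-enabled container image based on the GPU it detects:

    # Install RamaLama (packaged for Fedora; `pip install ramalama` also works)
    sudo dnf install ramalama

    # Pull and run a model; RamaLama selects an accelerated container image
    # (via Podman/Docker) matching the detected GPU, no manual ROCm setup.
    ramalama run llama3.2

    # Or expose an OpenAI-compatible endpoint instead of an interactive chat
    ramalama serve llama3.2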
Great work on this!