This is the most exciting system we’ve had the chance to put together in a very long time. Computational Fluid Dynamics is a field that requires more CPU cores than the number of syllables in its name. Our client for this build wanted a machine that was up to the task of optimising the aerodynamics of supercars. The way this is done is by simulating the airflow over a 3D model of the car, modifying the model slightly, running the simulation again, over and over and over, and using the results to find the optimal setup for the car.
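That simulate-modify-repeat loop can be sketched in a few lines. This is a toy version only: a made-up one-parameter “drag” function and a random tweak stand in for the client’s real CFD solver and CAD tooling.

```python
import random

# Toy stand-ins for the real workflow: "drag" is just a function of a single
# shape parameter, and "tweaking" nudges that parameter randomly.
def run_cfd(shape):          # hypothetical: the real solver simulates airflow
    return (shape - 2.0) ** 2

def tweak_geometry(shape):   # hypothetical: modify the model slightly
    return shape + random.uniform(-0.1, 0.1)

def optimise_aero(shape, iterations=500):
    best, best_drag = shape, run_cfd(shape)
    for _ in range(iterations):
        candidate = tweak_geometry(best)   # modify the model slightly
        drag = run_cfd(candidate)          # simulate airflow over it
        if drag < best_drag:               # keep the change only if it helped
            best, best_drag = candidate, drag
    return best, best_drag

shape, drag = optimise_aero(0.0)
```

Each pass through the loop is one “iteration” in the client’s sense, and the time per iteration is what the rest of this article is about shrinking.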
His previous system was able to run through an iteration in just 20 seconds, which makes sense, considering this was a dual Xeon system. We had some work to do if we wanted to outperform that.
Even with all of our combined knowledge and experience, building a PC optimized for these kinds of tasks required a lot of research. Normally you just have to think, “Okay, so he’s working with these programs, so we need all the cores we can find, and a ton of RAM. Clock speed and GPU would be good, but not the priority here,” but this was like working with alien technology. Yes, you need a lot of cores, but to get the optimal performance here, you need to go as deep as the actual architecture of the CPU. Many of the most powerful CPUs on the market just weren’t right for this job.
With a strict deadline of 3 weeks from order to delivery, we were hard pressed to find a suitable option, but we were lucky enough to get hold of a pair of Xeon 5220R CPUs. Each of these has 24 physical cores with Hyper-Threading – 48 threads per CPU – which leaves us with an incredible 96 CPU threads in total.
Let’s put this into perspective. Core i7 CPUs are considered really powerful desktop processors. The current generation of i7 processors has 8 hyperthreaded cores, which gives us 16 threads. Just one of these Xeons has 3 times as many threads as the latest i7s, and this computer has two of them. Each Xeon core isn’t quite as fast as an i7 core, but considering what this computer is going to be used for, the speed we’re getting is fantastic.
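To spell out that thread arithmetic:

```python
# Thread counts from the comparison above.
xeon_threads = 24 * 2             # 24 physical cores, 2 threads each via Hyper-Threading
system_threads = 2 * xeon_threads # two Xeon CPUs in this build
i7_threads = 8 * 2                # current-gen i7: 8 hyperthreaded cores

assert system_threads == 96
assert xeon_threads == 3 * i7_threads  # one Xeon has 3x an i7's thread count
```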
This is a great start, but there’s still a long way to go.
Motherboard and RAM
Next up is the motherboard. This generation of Xeons fits into the LGA3647 socket, which is not common at all outside of rackmount servers. The one we ended up using had some great features, including 16 RAM slots, but it also brought with it a lot more confusion. Incoming techno-babble:
How it works on normal motherboards is that every pair of slots is one “channel”. You’d fill one slot from each channel to get the most out of your RAM, and only then fill the second slot of each channel. So, first instinct: 16 slots means 8 channels, which would be 4 channels per CPU. That would give you a choice between 8 sticks of either 16GB or 32GB – 128GB or 256GB in total.

You might be surprised to hear that 192GB of RAM is better than 256GB here. I don’t mean that it’s better value for money; I mean it gives you better performance outright, unless you actually need that amount of RAM. The reason is that the 8 slots per CPU aren’t 4 channels with 2 slots each – they’re 4 channels with 1 slot each, plus 2 channels with 2 slots each, giving us 6 channels per CPU. The second slot of those dual-slot channels shouldn’t be used unless the extra capacity is absolutely necessary, as populating it greatly limits the speed of all the RAM.

So the conclusion with the RAM was 12 x 16GB: 192GB. Of course, being a server board, this RAM has to be dual-rank ECC Registered RAM.
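The slot arithmetic above, written out (channel layout as described for this particular board – four single-slot and two dual-slot channels per CPU):

```python
# Memory layout described above: each CPU has 6 channels, four with one DIMM
# slot and two with two slots. Filling the second slot on the dual-slot
# channels forces all the RAM to run slower.
DIMM_GB = 16
slots_per_channel = [1, 1, 1, 1, 2, 2]  # per CPU
cpus = 2

# Balanced config: one DIMM in every channel, second slots left empty.
balanced_dimms = cpus * len(slots_per_channel)  # 12 DIMMs
balanced_gb = balanced_dimms * DIMM_GB          # 192GB at full speed

# "Max capacity" config: every slot filled.
total_slots = cpus * sum(slots_per_channel)     # 16 DIMMs
max_gb = total_slots * DIMM_GB                  # 256GB at reduced speed
```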
If that last bit was a lot to take in, don’t worry, understanding it is our job.
The RAM wasn’t the only oddity that comes with this motherboard. To make room for 2 CPUs, the board is pretty huge, and it doesn’t fit in standard full tower cases, which only have room for ATX form factor boards. There are some larger cases, marketed as E-ATX – Extended ATX – and these generally have room for larger motherboards, but it turns out that E-ATX isn’t an officially recognized, standardized form factor; it actually refers to anything between ATX and server motherboards. Importing a case large enough for a CEB form factor board was our original plan, but with that deadline, it was out of the question. We’re not sure how we got so lucky with sourcing these obscure parts, but we got pretty much the only non-server case in the country large enough for this kind of motherboard, and it is humongous.
It has tempered glass panels the size of a small coffee table, and the box tells you that lifting it is a 2-person job. Normal cases have space for 2 or 3 fans – this one can fit 14. As if those stats aren’t enough, not only does it fit a dual-CPU motherboard, there’s still space for an additional Mini-ITX motherboard, along with its own power supply. We mentioned that the dual-CPU system doesn’t prioritize speed, but this case leaves room to cover that weakness in the future with an overclocked i9 build, for example. That’s not important right now, but we expect it will happen eventually.
The kind of programs this machine will run aren’t heavily dependent on the GPU, so we went with the dependable GTX 1650.
The client isn’t currently planning on using any software that needs serious GPU power, but it may become a consideration in the future, much like the i9 system we mentioned above.
This brings us to the final hurdle: the coolers. With our lead technician having a background in competitive overclocking, we know how important cooling is. In our opinion, it’s one of the most commonly overlooked areas when it comes to high-performance PCs. We’ve spent a lot of time on research and development to find the optimal coolers for our workstations, but of course, none of those coolers support socket LGA3647. With the CPUs themselves being so uncommon in South Africa, specialised coolers are basically non-existent here. The time constraint left us exactly one option: a pair of pretty small Intel coolers. If only we’d had enough time to source and install a pair of Noctua coolers. Wouldn’t that be exciting? (Spoiler: guess what we did.) Small heatsinks don’t have much surface area to spread out the heat, and small fans need to spin really fast to deliver the kind of air pressure you’d expect from 120mm or 140mm fans. What this means is that adequately cooling this computer at full load results in a loud, high-pitched whine.
Final Thoughts… Or are they?
As loud as it was, though, this PC had become one of the most powerful single units in the country. It has so many threads that you can’t even fit the per-core temperature list on one screen, and it ate up benchmarks in no time at all. Tests that take a powerful Core i7 over 40 seconds are completed by this computer in less than 9. Remember when we said earlier that our client’s old dual-Xeon system completed an iteration of his aerodynamic testing in 20 seconds? This one does it in 4.
After shipping it off to the client, we weren’t surprised that he was just as impressed as we were about the performance.
It’s unfortunate that the tight time constraints forced us to settle on CPU coolers we weren’t happy with, but oh well. PCs built on a deadline are going to have some (temporary) compromises, and if the only one on something this powerful is that it can get pretty noisy, that’s a fair trade. Plus, it’s a work computer – it’s meant to get the job done, and the noise has no effect on the PC’s performance. Any manufacturer would be extremely proud to have put together something like this.
But this was just round 1 for this PC. We’re Modena Computers – if you want a PC that isn’t flawless, go buy a Dell or HP.