+++
title = "Why you don't want to do what I do"
date = "2024-01-28"

[taxonomies]
tags = ["linux", "ramble"]
+++

# How I got here

So I see a lot of confusion from people who seem to think that they should also
get a system like mine, or otherwise replicate my software setup on their
machines. I figured I should explain why I have this setup, and why you probably
don't want what I have: not understanding how things work is likely to lead you
into many of the pain points that led me directly here. I should probably start
with the list of things that I don't need, but that people seem to think are a
massive gain.

### CPU

I do *not* need 32 cores for my system. I rarely use more than 2% of this CPU,
even while running several VMs and doing the occasional compile on it. Even
running 10 transcodes on the CPU at once still doesn't fully use it, and if
transcodes are something you need, I'd recommend a GPU for encoding instead.
That includes the integrated GPUs on consumer CPUs.

### RAM

RAM is something that depends massively on your workload. I personally use ZFS
with many TB of storage on mechanical drives. Feeding RAM to the ZFS ARC can
help performance substantially, and that is where most of my RAM goes. I run a
pretty large software stack by the standards of the affordable homelab space,
and outside of the ARC, my actual services max out at about 8GB of RAM, or 16GB
if I push things to "just testing" levels. You probably don't need 128GB of RAM.

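If you're wondering how much of your RAM the ARC is actually holding, you can
read the counters ZFS exposes. The sketch below is purely illustrative and not
part of my stack; it assumes a Linux host running OpenZFS, which publishes its
ARC statistics at /proc/spl/kstat/zfs/arcstats.

```python
#!/usr/bin/env python3
# Rough sketch: print current vs. maximum ZFS ARC size on Linux/OpenZFS.
# Assumes /proc/spl/kstat/zfs/arcstats exists (stock OpenZFS exposes it).

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"


def read_arcstats(path: str = ARCSTATS) -> dict:
    stats = {}
    with open(path) as f:
        # The first two lines are kstat headers; the rest are "name type data".
        for line in f.readlines()[2:]:
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats


if __name__ == "__main__":
    arc = read_arcstats()
    gib = 1024 ** 3
    print(f"ARC current size: {arc['size'] / gib:.1f} GiB")
    print(f"ARC target size:  {arc['c'] / gib:.1f} GiB")
    print(f"ARC maximum size: {arc['c_max'] / gib:.1f} GiB")
```
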
### PCIe

This was one of the biggest reasons that I went with the Epyc platform. It
supports bifurcation, letting me put more SSDs into PCIe slots, and it has
enough lanes to actually run more devices. Many consumer platforms may appear
to have two or three PCIe x16 slots and a few x1 slots, but they don't actually
have enough lanes to drive all of that. If you try to use two x16 devices, they
will usually drop to x8, and some boards only ever run the second slot at x8.
If you don't plan on much expansion, this is likely not something you care
about. I needed the ability to run a few SAS cards for storage, I currently run
PCIe networking at 10 gig speeds, and I want the ability to do 40/100 gig at
some point in the future. Between proper passthrough support and a massive
number of lanes to actually get devices running at full speed, I can do what I
wouldn't ever attempt on most consumer platforms.

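If you want to check whether your cards are actually running at full width,
Linux reports the negotiated link parameters in sysfs. The sketch below is
illustrative only (it assumes the standard Linux sysfs attributes, nothing
specific to my machines); it compares each device's current link width and
speed against the maximum it advertises.

```python
#!/usr/bin/env python3
# Rough sketch: compare negotiated vs. maximum PCIe link width per device.
# Reads standard Linux sysfs attributes; devices that don't report link info
# (bridges, some onboard hardware) are skipped.
from pathlib import Path


def read_attr(dev: Path, name: str):
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return None


for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cur_width = read_attr(dev, "current_link_width")
    max_width = read_attr(dev, "max_link_width")
    cur_speed = read_attr(dev, "current_link_speed")
    if not cur_width or not max_width:
        continue
    note = "  <-- below advertised width" if cur_width != max_width else ""
    print(f"{dev.name}: x{cur_width} of x{max_width} @ {cur_speed}{note}")
```
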
### VMs

VMs are generally useful in the homelab, but putting your NAS/SAN into a VM has
serious implications that you should understand before trying it. Most consumer
platforms have pretty bad support for things like PCIe passthrough, whether
that's general bugs or IOMMU groups that aren't properly separated, leaving you
unable to pass devices through at all. If you choose to run ZFS, you should
give it direct access to the drive controller so it can detect and correct
errors. If you have a few TB of throwaway data, do as you please, but if you
plan to store significant data, and/or you care about that data at all, I can't
recommend using USB devices or otherwise giving ZFS indirect access to the
storage; you will eventually run into issues, and by then it may be too late.
If rebuilding all of your storage is not an issue, do as you see fit, but don't
say you weren't warned.

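A quick way to check whether a board separates IOMMU groups sanely is to walk
/sys/kernel/iommu_groups. The sketch below is illustrative only; it assumes a
Linux host with the IOMMU enabled in firmware and on the kernel command line,
otherwise the directory will simply be empty.

```python
#!/usr/bin/env python3
# Rough sketch: list which PCI devices share each IOMMU group.
# If your GPU or HBA shares a group with unrelated devices, clean passthrough
# on that board is going to be difficult or impossible.
from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")
groups = sorted(groups_root.glob("[0-9]*"), key=lambda p: int(p.name))

if not groups:
    print("No IOMMU groups found; is the IOMMU enabled in firmware and kernel?")

for group in groups:
    devices = sorted(dev.name for dev in (group / "devices").iterdir())
    print(f"Group {group.name}: {', '.join(devices)}")
```
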
# What you probably should do

If you aren't planning on getting a board that has all of the server features,
compressing everything into one machine is likely going to cause you more pain
than it's worth. I'd also not recommend running storage in a VM unless you
understand the implications and plan for them. These days, I highly recommend
TrueNAS Scale if you don't want to manage the whole system at a command line,
and even if you do, some of the reporting and automation features are just nice
to have, speaking as someone who managed storage on headless Alpine for many
years. It can also host some basic VMs and "Apps" that cover most people's
Docker needs. If you can't explain at a technical level exactly why you don't
want it, it's probably a good option for you.

# Further questions?

If that wasn't a complete enough explanation of why my system is likely
overkill, and why I don't recommend what works for me, feel free to reach out
to me on Discord as kdb424, or via email at blog@kdb424.xyz. I love questions
and am happy to help!