Building a Homelab Computer

Hello, I currently run a homelab on an old Dell Optiplex. I'm looking to upgrade to a larger build with a strong GPU, to use both as a gaming PC and as a machine for running and experimenting with local LLMs. Do you have recommendations on builds I should use as a base? I also recently got a Quest 3, so I'd like to be able to connect it to the computer with an Oculus Link cable if possible. I don't have a fixed budget per se, but I'd like to hit the sweet spot of lifetime value.

My current setup is headless - what monitors or specs would you suggest?

Do these parts go on sale often? When is the ideal time to consider buying them?

Thanks.

Comments

  • +5

    What are you going to run apart from the occasional (small-ish) LLM? I'd recommend sticking to separate hardware for gaming and homelab purposes. Why? Personally I think:
    - Better separation: if hardware or software breaks, you don't lose EVERYTHING.
    - Most likely better power usage. Running a beefy PC as a server all the time (even when things are pretty efficient these days) will consume more energy in the long run.
    - If you manage a homelab with VMs, killing the OS after a mistake and setting it all up again is less painful than also reinstalling all your games and configs on a personal computer.
    - You can also (depending on the VM/VLAN setup) isolate things more easily, but that entirely depends on your networking ability.

    Again, this is a personal view of how I've done it at home, and I might be wrong :)

    I have a mini PC running an i5-8500T with 24GB RAM, I think, and then a gaming/personal PC with a Ryzen 5 7600 + 4080 Super, which I use remotely with Sunshine/Moonlight; it just works for my needs.

    • OP is vague, but I guess he is looking at the high price of GPUs, and wants to get dual use.

      It should be possible to keep the Optiplex as a home server (or upgrade it - how old is it?), but run an Ollama server on a Windows gaming PC for faster inference on the GPU.

      • That's exactly my setup, though as mentioned in another comment, it's easier and better to just fork out for a subscription with better model access. If privacy is the concern then I understand, but local is still not a good idea in terms of performance or cost. ¯\_(ツ)_/¯
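For reference, a minimal sketch of the split suggested above - Ollama serving from the gaming PC, queried from the Optiplex. This assumes Ollama is installed on the gaming PC; the IP address and model name are placeholders for your own:

```shell
# On the gaming PC: expose Ollama on the LAN instead of localhost only
# (on Windows, set OLLAMA_HOST=0.0.0.0 as an environment variable first)
OLLAMA_HOST=0.0.0.0 ollama serve

# From the Optiplex (or any LAN machine): query it over the network
# (192.168.1.50 and llama3 are placeholders, not fixed values)
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hi", "stream": false}'
```

Port 11434 is Ollama's default; any client that speaks the Ollama HTTP API can point at the gaming PC the same way.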

  • Sorry, what do you mean by a homelab and an LLM?

  • If your goal is to run a somewhat decent LLM, then you pretty much have to aim for those 30B+ or even 70B models.

    A 4-bit quantized 70B model would require about 42GB of VRAM, and you need spare headroom for context etc.

    Unfortunately, that's going to cost way more than an AI subscription.

    Unless your goal is to run a 7B-ish model for automation, personal-assistant sort of jobs.

    • "Decent" depends on you application. A 2GB model can do many things. But no home version will replace a frontier model.
