

Power scaling for these old CPUs is not great though. Mine is slightly newer and even at idle it still uses 50% of its TDP.
The Xeon E5-2670 has a 115W TDP, which means 2x115 = 230W for the processors alone. With 8 RAM modules at ~3W each, it’s going to guzzle ~250W under load, while screaming like a jet engine. Assuming $0.12/kWh, that’s $262.80 per year for electricity alone.
It would be great if you have an isolated server room to contain the noise and cheap electricity, but a more modern workstation should use a quarter of that power or even less.
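For anyone who wants to plug in their own numbers, the yearly cost is just watts x hours x price per kWh. A quick sketch using the ~250W and $0.12/kWh figures from above:

```sh
# annual electricity cost for a ~250W continuous draw at $0.12/kWh
echo "250 * 24 * 365 / 1000 * 0.12" | bc -l   # ~262.8 USD per year
```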
Google Reader was the best. Not sure why Google killed it, but it was really good at both content discovery and keeping up with sites you’re interested in. I tried several alternatives but nothing came close, so I gave up and hung out more on forums / link aggregators like slashdot, hacker news, reddit and now lemmy for content discovery. I’m also interested to hear what others use.
Wow, I never thought of using usbip to work around Wayland issues with KVM apps. Sounds useful as a last resort to get KVM working.
Yes, but autossh will automatically try to re-establish the connection when it’s down, which is perfect for servers behind CGNAT that you can’t physically access. Basically a set-it-and-forget-it kind of app.
If this server is running Linux, you can use autossh to forward some of its ports through another server. In this example they only use it to forward the SSH port, but it can be used to forward any port you want: https://www.jeffgeerling.com/blog/2022/ssh-and-http-raspberry-pi-behind-cg-nat
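Here’s a minimal sketch of the idea, not the exact setup from the linked post (the VPS hostname, user, and port 2222 are placeholders): keep a reverse tunnel open from the CGNAT’d server to a VPS, so that running "ssh -p 2222 localhost" on the VPS lands on the home server.

```sh
# -M 0 disables autossh's monitor port and relies on the ServerAlive options
# to detect a dead connection; -f -N backgrounds the tunnel without running
# a remote command.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -o "ExitOnForwardFailure yes" \
  -R 2222:localhost:22 \
  tunneluser@vps.example.com
```

You’d typically wrap this in a systemd service so the tunnel comes back up after a reboot.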
By “remotely accessible”, do you mean remotely accessible to everyone, or just to you? If it’s just you, then you don’t need to set up a reverse proxy. You can use your router as a VPN gateway (assuming you have a static IP address), or you can use Tailscale or ZeroTier.
If you want to make your services remotely accessible to everyone without using a VPN, then you’ll need to expose them to the world somehow. How to do that depends on whether you have a static IP address or are behind CGNAT. If you have a static IP, you can route ports 80 and 443 to your load balancer (e.g. Nginx Proxy Manager), which works best if you have your own domain name so you can map each service to its own subdomain in the load balancer. If you’re behind CGNAT, you’ll need an external server/VPS that routes traffic arriving on its ports 80 and 443 into your home network, essentially granting you a static IP address.
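If you go the load balancer route, getting Nginx Proxy Manager up is basically one container. A hedged sketch (image name and ports as documented by the NPM project; the host paths are placeholders):

```sh
docker run -d --name npm \
  -p 80:80 -p 443:443 -p 81:81 \
  -v "$PWD/npm/data:/data" \
  -v "$PWD/npm/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:latest
# port 81 is the admin UI, where you map each subdomain to an internal host:port
```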
It’s usually used for storage servers these days. ZFS is most stable there.
Not sure if it’s still possible on the latest version of GNOME. Maybe try turning off lock screen notifications, because those sleep warning notifications often show up when the screen is already locked?
I kinda assumed anyone who knows how to install Linux on their laptop wouldn’t have too much trouble figuring out how VMs work.
Try running those Adobe apps in a Windows virtual machine. Use KVM with virt-manager instead of VirtualBox. If the performance is acceptable to you, you can then use Linux as the primary OS and only use the VM for the Adobe apps. The VM boots faster too, because you can just suspend it and resume it again later.
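If you’d rather script the VM creation than click through virt-manager, virt-install is the CLI counterpart; a rough sketch (the ISO path, sizes, and VM name are placeholders):

```sh
virt-install \
  --name win10 \
  --memory 8192 \
  --vcpus 4 \
  --cdrom /path/to/Win10.iso \
  --disk size=80,format=qcow2 \
  --os-variant win10 \
  --graphics spice
```

The suspend/resume trick is "virsh managedsave win10" followed later by "virsh start win10", which restores the saved state instead of doing a full boot.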
GPU passthrough works pretty well these days, but anti-cheat software will detect that you’re running inside a VM. Evading anti-cheat detection is a separate issue, unrelated to GPU passthrough; it usually involves making the VM look as much like real hardware as possible (e.g. using a real MAC address, hiding the KVM hypervisor signature, etc.). It’s quite a deep rabbit hole and I haven’t actually tried it.
Next: how do we know tailscale’s network hasn’t been backdoored?
I think you can send a SIGUSR1 signal to the mumble process to tell it to reload the SSL certificate without actually restarting mumble’s process. You can use docker kill --signal="SIGUSR1" <container name or id>, but then you still need to give your user access to the docker group. Maybe you can set up a monthly cron job as the root user to run that command instead?
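Something like this in root’s crontab would do it; a sketch where "mumble" is a placeholder for your actual container name:

```sh
# reload the certificate at 04:00 on the 1st of every month
0 4 1 * * /usr/bin/docker kill --signal=SIGUSR1 mumble
```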
Note that rsync.net includes 7 days of free daily snapshots. Also, the main advantage over Backblaze B2 for me is that you can just sync a whole folder full of small files instead of compressing them into an archive before uploading it to a B2 bucket. This means you can access individual files later without needing to download the whole archive.
I still use B2 to store long-term backup archives though.
Aye. Docker on Linux doesn’t involve any virtualization layer. What should the direct installation setup be called? Custom setup?
I’m currently using nextcloud:26-apache from here because some Nextcloud apps I use are not compatible with v27 and v28 yet. The apache version is actually less hassle to use because Nextcloud can generate its .htaccess configuration dynamically by itself, unlike the php-fpm version where you have to maintain your own nginx configuration. The php-fpm version is supposedly faster and scales better, but chances are you won’t see that benefit unless your server handles a large amount of traffic.
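For reference, a minimal sketch of running that tag (port 8080 and the volume name are placeholders; database settings and the usual env vars are omitted):

```sh
docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud-data:/var/www/html \
  nextcloud:26-apache
# the apache variant serves everything itself on port 80, no separate nginx needed
```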
People usually come here looking for advice on how to replace their dockerized Nextcloud setup with a bare-metal one. Now you’ve come along presenting a solution to do the reverse! Bravo!
What do you guys think about putting the different components (webserver, php, redis, etc.) in separate containers like this, as compared to all in one?
I actually have a similar setup, but with the nextcloud apache container instead of php-fpm, and in rke2 instead of docker compose.
For comparison, I run a ThinkStation P300 with an i7-4790 (84W TDP) 24/7, and the power usage looks like this:
Even when idling, this old processor still guzzles 45W. Certainly not as nice as GP’s, which only uses 10W at idle.
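If anyone wants to check their own CPU without a wall meter, turbostat can report package power via RAPL (note this is the CPU package only, not whole-system draw, and the flags below assume a reasonably recent turbostat):

```sh
# print one summary line of package power every 5 seconds
sudo turbostat --Summary --quiet --show PkgWatt --interval 5
```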