Yep, the problem was that docker started before the NFS mount. Adding the dependency to my systemd docker unit did the trick!
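For anyone hitting the same thing, the fix is a drop-in like this (the mount path here is a placeholder for whatever your NFS mount is; systemd derives the mount unit name from the path, so /mnt/media becomes mnt-media.mount):

```
# /etc/systemd/system/docker.service.d/wait-for-nfs.conf
[Unit]
# /mnt/media is a placeholder; adjust to your fstab
After=mnt-media.mount
RequiresMountsFor=/mnt/media
```

Then `systemctl daemon-reload` and restart docker.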
isn’t it an annoyance having to connect to your home network all the time?
It’s less annoying than the gnawing fear that my network might be an easy target for attackers.
My indie theater has a bright blue pixel in their main screening room.
I complained about it when I first noticed, which was more than a year ago. It was still there the last time I went.
How to make a suckless.org contributor cry
Surely this could be good, right?
If celebrities need to be accessible to their biggest fans, maybe it would induce them to leave the birdsite? And if this is as big a migration as the article suggests, it has the potential to snowball via network effects, giving other influential users one less reason to feel chained to a dumpster fire.
I worked on a product that was only allowed to return 200 OK, no matter what.
Apparently some early and wealthy customer was too lazy to check error codes in the response, so we had to return 200 or else their site broke. Then we’d get emails from other customers complaining that our response codes were wrong.
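To make that concrete (the endpoint and payload here are made up, but the behavior was real), every response looked something like:

```
$ curl -si https://api.example.com/v1/widgets/does-not-exist
HTTP/1.1 200 OK
Content-Type: application/json

{"status": 404, "error": "widget not found"}
```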
Sounds like a pretty shit security feature. I wonder if it would keep the door open if I were to print a photo of the owner and wear it like a mask.
When I watched those episodes for the first time, my reaction was: “So the Silicon Valley billionaire would just let the poor people use his network to get their messages out?”
The rich were portrayed as apathetic, instead of active participants.
Yes, OP, I highly recommend a GL.iNet device. It’s pocket-sized and always does the job.
It’s also great for shitty wifi that tries to limit how many devices you can connect. The router will appear as one MAC and then all your other devices can route traffic through it.
If it’s just for movies, consider an Intel Arc A380.
Small, cheap, great transcoding performance, and its drivers should be shipped by default with most distros. It really can’t do games though.
As someone who has owned enterprise servers for self-hosting, I agree with the previous comment that you should avoid owning one if you can. They might be cheap, but your long-term ownership costs are going to be higher. That’s because as the server breaks down, you’ll be competing with other people for a dwindling supply of compatible parts. Unlike consumer PCs, server hardware is incredibly vendor-locked. Hell, my last ProLiant would keep the fans ramped at 100% because I installed an HDD that the BIOS didn’t like. This was after I spent weeks tracking down a disk that would at least be recognized, and the only drives I could find were already heavily used.
My latest server is built with consumer parts fit into a 2U rack case, and I sleep so much easier knowing I can replace any of the parts myself with brand new alternatives.
Plus, as others have said, a 1U can be really loud. I don’t care about the sound of my gaming computer, but that PowerEdge was so obnoxious that, despite being in the basement, I had to smother it with blankets just so the fans didn’t annoy me when I was watching TV upstairs. I still have a 1U Dell PowerEdge, but I specifically sought out the generation that still lets you hack the fan speeds over IPMI. From all my research, no such hack exists for the ProLiant line.
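For anyone shopping for one: on a compatible generation, the hack is just a couple of ipmitool raw writes. These bytes are the commonly circulated ones for older iDRACs (newer firmware removed the capability), and the IP/credentials are placeholders:

```
# Switch the iDRAC to manual fan control
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x01 0x00
# Pin all fans to roughly 20% duty cycle (0x14 = 20)
ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14
```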
On Linux, I run `fwupdmgr` to periodically check for firmware updates. Not every manufacturer supports it yet, but I’ve had good results with a few laptops. Not sure if it supports BIOS.
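The whole loop is just:

```
fwupdmgr refresh       # pull the latest firmware metadata from LVFS
fwupdmgr get-updates   # list devices with pending updates
fwupdmgr update        # apply them (some devices need a reboot)
```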
Also though, I generally try to leave my BIOS alone if everything is working fine. Unless I hear of a reason to update, I’d rather stay on a stable version.
I don’t actually like those big recliners they have in movie theaters. They’re too plush to be comfortable (if that makes sense) and the droning whine of the electric motor is annoying when someone adjusts their seat during a screening.
Assuming that the disk is of identical (or greater) capacity to the one being replaced, you can run `btrfs replace`.
https://wiki.tnonline.net/w/Btrfs/Replacing_a_disk#Replacing_with_equal_sized_or_a_larger_disk
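Something like this, where the device paths and mount point are placeholders for your own:

```
# -f overwrites any existing filesystem signature on the new disk
sudo btrfs replace start -f /dev/sdX /dev/sdY /mnt/pool
sudo btrfs replace status /mnt/pool   # check progress
```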
I’d recommend BTRFS in RAID1 over hardware or mdadm raid. You get FS snapshotting as a feature, which would be nice before running a system update.
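A pre-update snapshot is a one-liner (assuming your root is a BTRFS subvolume and you keep a /.snapshots directory; adjust to your layout):

```
sudo btrfs subvolume snapshot -r / /.snapshots/pre-update-$(date +%F)
```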
For disk drives, I’d recommend new if you can afford them. You should look into shucking: It’s where you buy an external drive and then remove (shuck) the HDD from inside. You can get enterprise grade disks for cheaper than buying that same disk on its own. The website https://shucks.top tracks the price of various disk drives, letting you know when there are good deals.
“Remember: No PID”
Haven’t used it yet, but I’ve been researching authentik for my own SSO.
BTRFS should be stable in the case of power loss. That is to say, it ought to recover to a valid state. I believe the only unstable modes are RAID 5/6.
I’d recommend BTRFS in RAID1 mode over mdadm RAID1 + ext4. You get checksumming and scrubs to detect drive failures and data corruption. You also have snapshotting, in case you’re prone to the occasional fat-fingered `rm -rf`.
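Setting it up and keeping it honest is only a few commands (device names and mount point are placeholders):

```
# Mirror both data and metadata across two disks
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
# Run a scrub periodically (e.g. monthly) to catch silent corruption
sudo btrfs scrub start /mnt/pool
sudo btrfs scrub status /mnt/pool
```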
For backup, maybe a blu-ray drive? I think you would want something that can withstand the salty environment, and maybe resist water. Thing is, even with BDXL discs, you only get a capacity of 100GB each, so that’s a lot of discs.
What about an offsite backup? Your media library could live ashore (in a server at a friend’s house). You issue commands from your boat to download media, and then sync those files to your boat when it’s done. If you really need to recover from the backup, have your friend clone a disk and mail it to you.
Do you even need a backup? Would data redundancy be enough? Sure if your boat catches fire and sinks, your movies are gone, but that’s probably the least of your problems. If you just want to make sure that the salt and water doesn’t destroy your data, how about:
This would probably be cheapest and have the least complexity.
If you’re doing it from scratch, I’d recommend starting with a filesystem that has checksums and filesystem scrubs built in: e.g. BTRFS or ZFS.
The benefit of something like BTRFS is that you can always add disks down the line and turn it into a RAID array with a couple of commands.
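For example (device and mount point are placeholders):

```
# Add a second disk, then rebalance data and metadata into RAID1
sudo btrfs device add /dev/sdY /mnt/pool
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```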