ECS/EKS: The ocean belongs to someone else.
Bro The RAID Fuckin Sucks
I don’t have to, I watched Planet of the Apes
That surgeon general’s warning sent me into a giggle fit.
No. Symlinks and hardlinks are two approaches to creating a “pointer to a file.” They are quite different in implementation, but at a high level:
In both cases, the only additional data used is the metadata used for the link itself. The contents of the file on disk are not copied.
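A minimal Python sketch of the difference, assuming a Linux/Unix filesystem (the filenames are made up for illustration):

```python
import os

# Create a small file to link against.
with open("original.txt", "w") as f:
    f.write("hello\n")

# Hardlink: a second directory entry pointing at the same inode.
os.link("original.txt", "hard.txt")

# Symlink: a new inode whose contents are just the target's path.
os.symlink("original.txt", "soft.txt")

orig = os.stat("original.txt")
hard = os.stat("hard.txt")
soft = os.lstat("soft.txt")  # lstat so we inspect the link itself

# The hardlink shares the original's inode; the file data exists once on disk.
print(orig.st_ino == hard.st_ino)  # True
print(orig.st_nlink)               # 2 (two names for one inode)

# The symlink is its own tiny inode; its "size" is just the length of the stored path.
print(soft.st_ino == orig.st_ino)  # False
print(soft.st_size)                # 12 == len("original.txt")
```

One consequence: delete original.txt and hard.txt still works (the inode survives until its last name is gone), while soft.txt becomes a dangling link.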
This is neat but the selling point for me with the Pebble is the e-ink display. If repebble fails though, my next watch will be a Pine. Hopefully my Versa 2 holds on for a bit longer 🤞
I don’t necessarily agree with it, but there’s the third option of just disabling SELinux and removing the frustration entirely.
…but you have to get whatever it is you’re transporting to the moon first
No, but you’ll have much more overhead. I have a VM that hosts all the Docker deployments that don’t need much disk space (which is most of them).
This is a big point. One of the key advantages of docker is the layering and the fact that you can build up a pretty sizeable stack of isolated services based on the same set of core OS layers, which means significant disk space savings.
Sure, 200-700MB for a stack of core layers seems small, but multiply that saving across a lot of containers and it adds up.
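To make that concrete, a back-of-the-envelope sketch in Python with hypothetical numbers (the per-VM OS size and service count are assumptions, not from the thread):

```python
# Back-of-the-envelope disk math with made-up but plausible numbers.
base_layers_mb = 500   # shared core OS layers (the 200-700MB range above)
app_layer_mb = 50      # per-service layer on top
services = 20

# Containers: the base layers are stored once and shared by every image built on them.
container_total = base_layers_mb + services * app_layer_mb

# Separate VMs: every guest carries its own full OS image.
vm_os_mb = 2000
vm_total = services * (vm_os_mb + app_layer_mb)

print(f"containers: {container_total / 1000:.1f} GB")  # ~1.5 GB
print(f"VMs:        {vm_total / 1000:.1f} GB")         # ~41.0 GB
```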
That’s not how golf works but I like where your head is at.
I think you mean aman
zing.
I’ll see myself out.
What’s blowing my mind about this entire thread is the “rewrite application to support RHEL9” thing I keep seeing. What the fuck applications are y’all running that are so tightly bound to the OS that they can’t handle library and/or kernel updates?
Ultimately it’s a matter of personal choice and risk tolerance.
The Z1 will be simpler and have larger capacity, but if you have a drive fail you’ll need to get it replaced quickly or risk having to rebuild/restore if a second drive follows the first one to the grave during the resilver.
Your Z2 setup right now can have two drives fail and still be online, and having a wider spread of power-on hours is usually a good thing in terms of failure probability.
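As a toy illustration of that trade-off, here’s a Python sketch assuming independent per-drive failures over some window (the drive count and failure rate are hypothetical; as the next comment notes, same-batch drives tend to fail together, so real-world odds are worse than this model suggests):

```python
from math import comb

def survival(n_drives: int, parity: int, p_fail: float) -> float:
    """Probability the vdev stays online, assuming each drive fails
    independently with probability p_fail over some window."""
    return sum(
        comb(n_drives, k) * p_fail**k * (1 - p_fail)**(n_drives - k)
        for k in range(parity + 1)  # survive up to `parity` simultaneous failures
    )

n, p = 6, 0.05  # hypothetical: 6-drive vdev, 5% per-drive failure chance
print(f"RAIDZ1: {survival(n, 1, p):.4f}")  # tolerates 1 failure, ~0.97
print(f"RAIDZ2: {survival(n, 2, p):.4f}")  # tolerates 2 failures, ~0.998
```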
I manage a large (~14,000) number of on-site RAID1 arrays in various environments, and there is definitely a trend for drives shipped at the same time to fail at roughly the same time. It’s common enough that we often intentionally swap drives out before shipping a new unit to the customer site.
On my homelab, I’m much more tolerant of risk since I have trust in my 3-2-1 backup solution and if my NAS goes down it’s not going to substantially affect anything while I wait for a drive replacement.
Please do not the arch viles.
🌍👨‍🚀🔫👨‍🚀
One of its studies, which was carried out in collaboration with the Charité-Universitätsmedizin Hospital in Berlin, found that breast cancer survivors who took part in the study experienced mental and physical arousal when they used a sex toy.
Filed under “You don’t say?”
how do I delete someone else’s comment?
This is from last year.