

Sounds like a them problem then.
podman exists and doesn’t force root…
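A minimal sketch (the alpine image is just an example): rootless podman runs containers as your own user, no daemon, no sudo:

```
# runs as your user, no daemon, no sudo needed
podman run --rm -it docker.io/library/alpine:latest sh

# inside the container you're "root", but on the host that maps
# to your own UID via user namespaces:
podman unshare cat /proc/self/uid_map
```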
It’s yours; no need to trust a public instance with your searches. Pages full of settings to tweak as you like, and fewer problems with an algorithm ‘helping’ you. It aggregates results across multiple search engines that you choose, and you can set up your own (or a curated) block list of crappy AI-slop sites. Don’t like fandom.com or something? Gone. Manage your own bangs, e.g. !aa for annas-archive. Pipe it through a VPN with gluetun for better isolation. If you have your head around docker already it’s more like half an hour to set up, so why not?
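For anyone wondering, the thing I’m describing is SearXNG, and the short version really is about this much (port and paths are just examples, check the official docs for current image settings):

```
# hedged sketch: minimal single-container SearXNG instance
docker run -d --name searxng \
  -p 8080:8080 \
  -v ./searxng:/etc/searxng \
  docker.io/searxng/searxng:latest
# then point your browser at http://localhost:8080
# and start tweaking settings.yml in ./searxng
```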
Can hook it up to Perplexica and a local LLM for a fully local AI search that you define, use it as an MCP server, do deep research with it…
“I reject your reality & substitute my own”
I’m fine with Adam Savage’s version, it’s the
“I reject reality & substitute some crap someone (or something) told me”
that all too many people do nowadays, and it shits me to tears…
know truth from fiction.
You jest, but…
You are aware that Netflix et al. apply compression to their streams (often quite heavily, bitrate-wise)? Better-quality Blu-ray rips are often available on the high seas…
While I generally agree and consider this insightful, it behooves us to remember that the (actual, 1930s) Nazis did it with newspapers, radio and rallies (… in a cave, with a box of scraps).
I replied to @muntedcrocodile@lemm.ee and understood the question as “Is distrobox as secure as QubesOS?”, to which I replied “No”.
Ahh, fair cop. Good point on Secureblue, but my threat model doesn’t take me there.
Eh, it’s Fedora under the hood with SELinux enabled, and immutable; better than most security-wise. I didn’t say much more.
Bazzite is the better distro because you install things in a distrobox. Muck around, break things in there, but your main distro stays safe, secure and stable.
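The workflow, roughly (the box name and package are placeholders):

```
# create a throwaway Fedora box and hop in
distrobox create --name sandbox --image registry.fedoraproject.org/fedora:latest
distrobox enter sandbox

# inside: install and break whatever you like; it shares your $HOME,
# but the host OS image stays untouched
sudo dnf install some-sketchy-package

# nuke it when you're done
distrobox rm sandbox
```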
Perhaps not saved, but I’d venture the most significant nail in the coffin of the scientific publishing mafia so far, pursued with integrity and honor. The rise of open publishing that followed is very telling, and in my mind directly attributable to Alexandra’s work and its popularity; they know they need to adapt or (probably and) die.
Still need to work on the publish or perish mentality, getting negative results published, and getting corporate propaganda out of the mix, to name a few.
You can cycle the smaller drives to cold backup, that’s not a waste. You do have backups, which RAID is not, right?
Sure, works fine for inference with tensor parallelism; USB4 / Thunderbolt 4/5 is a better bet than Ethernet (40 Gbit+ and already there; see distributed-llama). Trash for training / fine-tuning, though: that needs much higher inter-GPU bandwidth, or, better, a single GPU with more VRAM.
Seems like data integrity is your highest priority, and you’re doing pretty well; the next step is keeping a copy offsite. It’s the 3-2-1 backup strategy: 3 copies, 2 media (used to mean CDs etc., but now think offline drives), 1 offsite (in case of fire, meteor strike etc.). So look to that: stash a copy at a friend’s or something.
In your case I’d look at getting some online storage to fill the offsite role while you’re overseas (paid, probably, but a year of 1 or 2 TB is quite reasonable), leaving you with no pressure on the selfhosting side. Just Tailscale in, muck around and have fun, and if something breaks, no harm done; data safe.
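The offsite leg can be as simple as this on a cron schedule (host and paths are placeholders, assuming SSH access to wherever the copy lives):

```
# simplest possible offsite copy: mirror over SSH
rsync -av /srv/data/ user@friends-box:/backups/mydata/
# versioned tools like restic or borg are a better guard
# against accidental deletes propagating to the copy
```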
I’ve done it for what seems like forever, and I’d still be worried about leaving a system out of physical control for any extended period of time. At the very least, having someone to reboot it if connectivity or power fails will be invaluable, but talking them through a broken update is another thing entirely, and you shouldn’t make that a critical necessity. Too much stress.
I run a gluetun container (actually two, one local and one through Singapore) client-side, which is generally regarded as pretty damn bulletproof kill-switch-wise. The arr stack etc. uses this network exclusively. This means I can use FoxyProxy to switch my browser up on the fly, bind things to tun0/tun1 etc., and still have direct connections as needed. It’s pretty slick.
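The shape of it, roughly (provider and key are placeholders; see gluetun’s wiki for the exact variables your VPN needs):

```
# the VPN container: anything routed through it loses connectivity
# if the tunnel drops, which is the kill switch
docker run -d --name gluetun \
  --cap-add=NET_ADMIN --device /dev/net/tun \
  -e VPN_SERVICE_PROVIDER=mullvad \
  -e WIREGUARD_PRIVATE_KEY=... \
  qmcgaw/gluetun

# join another container to gluetun's network namespace;
# it has no route to the internet except through the tunnel
docker run -d --name qbittorrent \
  --network=container:gluetun \
  lscr.io/linuxserver/qbittorrent
```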
I have exactly this (AM4, 7800XT, 3440x1440 monitor) running Bazzite. Almost every game I have maxes out at 165 Hz, and it works great for LLM inference too. Really, the nutso-expensive stuff is only necessary for 4K+ (which I find has diminishing returns at present), LLM training (rent a GPU instead), and probably modern VR. Just to let you know you’re barking up the right tree. :)
Oh, and the 7800XT idles / YouTubes at ~14-20 W, 7 with the monitor off. I’m actually using it as a backup NAS / home server in downtime; the system pulls ~40-45 W at the wall, and I haven’t even gone deep into power saving, as it’s a placeholder for a new homelab build that’s underway.
The old adage is never use v x.0 of anything, which I’d expect to go double for data integrity. Is there any particular reason ZFS gets a pass here (speaking as someone who really wants this feature)? TrueNAS isn’t merging it for a couple of months yet, I believe.
Yup (although minutes seems long; depending on usage, weekly might be fine). You can also combine it with updates, which require going down anyway.
Basically, you want to shut down the database before backing up; otherwise your backup might be mid-transaction, i.e. broken. If it’s docker you can just docker-compose down it, back up, and then docker-compose up, or equivalent.
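Something like this, as a sketch (paths and names are whatever your setup uses):

```
cd /srv/myapp
docker-compose down                                  # stop the DB cleanly
tar czf ~/backups/myapp-$(date +%F).tar.gz ./data    # archive the stopped state
docker-compose up -d                                 # back in business
```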
No more so than using any search engine directly, it’s a nice to have. Don’t let perfect be the enemy of good enough.
By the time you’ve investigated it, you could have stood up your own instance…