Mozilla has loads of projects, not just the browser. I doubt more than 30 people work exclusively on the engine nowadays.
Andreas Kling, the founder and lead dev, has a massive love for Twinings tea and spent a few dev logs improving Ladybird's support for their website, with the end goal of ordering his tea through the browser :)
EDIT: There’s a fix. https://unpackerr.zip automatically unpacks these rar archives into coherent files for importing via Sonarr/Radarr. I suppose you can do this manually with unrar if you’re brave.
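If you do go the manual route, it's unrar rather than tar doing the work; a rough sketch, with placeholder paths:

```sh
# Extract every rar archive under the download folder into a
# directory Sonarr/Radarr can import from. -o- skips overwriting.
# /data/downloads and /data/import are placeholder paths.
find /data/downloads -name '*.rar' \
  -exec unrar x -o- {} /data/import/ \;
```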
It would be nice if people read the post and the project before making assumptions, like implying the project started from scratch yesterday or that it's run by amateurs. This is a four-year-old project, founded by a former KHTML/WebKit developer who worked at Apple!
Sure, but an individual website may use only a few of those standards. Ladybird devs will pick a website they like to use - Reddit, Twitter, Twinings tea, etc. - and improve adherence to this or that standard to make that one website look better. In turn, thousands of websites suddenly work perfectly, and many others work better than before.
Ladybird is now largely conformant with the HTML standards. What's left is edge cases (and sites that don't follow the standards themselves) and performance. This isn't a new project.
Ladybird was born from SerenityOS, a hobbyist Unix-like (or POSIX-compliant?) OS that simply aimed to do things "from the ground up". At some point they needed a browser, and the response, true to form, was to write one from scratch.
From there it organically attracted enough attention to stand on its own, but it was never originally intended to be a "third browser engine".
A registry switch that'll mysteriously reset itself. We've had this shit with countless Windows configurations at work that our IT guy has to battle on the regular.
270GB feels insane for the source code of a single organisation. Are there media assets or backups in there too?
EDIT: yep, multiple subsidiaries and Slack comms, which could inflate it by a lot. We post a whole lot of uncompressed shit on our Slack.
With what money? SpaceX is the only company with any kind of steady revenue to its name, and that's only because the US government subsidised it.
The last functional Windows.
we do a bit of entrapment
I don't know (never played Xbox till the end of its lifecycle) - what did they do? 👀
Not if you call it GNU/Linux 🤓☝️
The trick to writing a JavaScript web app is to first consider literally any other technology to solve your problem, and only then consider JavaScript.
Rsync over SSH. I use it for a weekly Nextcloud backup to a Hetzner storage box.
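Roughly what that job looks like, as a sketch - the username and paths are placeholders, and it assumes key-based SSH auth to the storage box is already set up:

```sh
#!/bin/sh
# Weekly Nextcloud data backup to a Hetzner storage box over SSH.
# u123456 and the paths are hypothetical; substitute your own.
# Hetzner storage boxes listen for SSH on port 23.
rsync -az --delete \
  -e "ssh -p 23" \
  /var/www/nextcloud/data/ \
  u123456@u123456.your-storagebox.de:nextcloud-backup/
```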
Shouldn't be that bad. My Raspberry Pi 4B can do Jellyfin and Nextcloud without pushing 15W at full load.
x86 is inefficient, especially older models, but you'll likely only push anything over 10W when actually streaming something that requires transcoding. Most of the time your home server is gonna sit idle or run some tiny cron job that won't really blast the CPU at all.
idk what resolution you use for streaming, but my Raspberry Pi 4B runs Plex at 1080p just fine as long as the source isn't H.265/AV1 (though on Jellyfin you might be able to use the Pi's GPU for transcoding).
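For reference, that GPU path boils down to ffmpeg's h264_v4l2m2m hardware encoder; a minimal sketch, assuming your ffmpeg build includes it (the filenames and bitrate are placeholders):

```sh
# Transcode to 1080p H.264 using the Pi's hardware encoder.
# input.mkv / output.mp4 are placeholder names.
ffmpeg -i input.mkv \
  -vf scale=-2:1080 \
  -c:v h264_v4l2m2m -b:v 4M \
  -c:a copy \
  output.mp4
```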
I use Nextcloud too and it's a tiny bit slower than I'd like, but I think that's a Wi-Fi issue.
Literally any PC on Amazon for $200 CAD, then add your own SSD. I'd say 8GB of RAM, but that's just for cache; you'll rarely go over 4GB in general use.
That, or a Raspberry Pi 4B/5, which runs you about $150 once you get a case, a power supply, and a powered USB dock for your SSDs (just for safety, since technically the Pi's USB ports can't handle certain SSDs' power requirements).
Use DietPi (dietpi.com) to set up your services and it'll run nice and smooth for anything that isn't H.265. That might be annoying, but Plex (and possibly Jellyfin) lets you transcode stuff in the background, which is nice.
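DietPi installs are mostly menu-driven, but there's a CLI too; a rough sketch, assuming a recent DietPi image (software IDs vary between versions, so check your own list output):

```sh
# Interactive installer menu:
sudo dietpi-software

# Or look up a package's ID and install it non-interactively.
dietpi-software list | grep -i jellyfin
sudo dietpi-software install <ID>   # replace <ID> with the number from the list
```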
If I were Stack Overflow, I would've transferred my backups to OpenAI weeks before the announcement for this very reason.
This is also assuming the LLMs weren’t already fed with scraped SO data years ago.
It's a small act of rebellion, but SO already has your data (mine included) and they'll do whatever they want with it.
Yeah, I didn't realise they were rar archives from how they show up on disk - usually people name.their.torrents.like.this, so it fucks up typical file name conventions.
I'll keep that in mind too, thanks! I'm not using qbitmanage yet, though; I'll have to look into that 👀