2006 for me. Work varies depending on the company and position, but I mostly find ways around it.
Sound settings have at least three places where they can be set in Windows, and each place doesn’t necessarily implement all of the functionality of the others.
Windows settings are a mess.
Basically. Using Windows after spending a decade-plus with GNOME and macOS is cumbersome.
The *nix desktop space.
Year of desktop Linux is when? 😆
Aren’t setters and getters discouraged in Python?
I remember reading something like, “This isn’t C++, and Python doesn’t have private vars. Just set the var directly.”
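For what it’s worth, here’s a minimal sketch of what that advice usually looks like in practice (class and attribute names made up for illustration): plain attributes by default, and a `@property` only if you later need validation, so callers never have to switch to getter/setter calls.

```python
# Hypothetical example: plain attribute access vs. a property added later.
class Account:
    def __init__(self, balance: float) -> None:
        self.balance = balance  # just a public attribute, no getter/setter


class CheckedAccount:
    def __init__(self, balance: float) -> None:
        self._balance = 0.0
        self.balance = balance  # goes through the setter below

    @property
    def balance(self) -> float:
        return self._balance

    @balance.setter
    def balance(self, value: float) -> None:
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value


acct = Account(10.0)
acct.balance = 25.0      # direct assignment, the usual Python default
checked = CheckedAccount(10.0)
checked.balance = 25.0   # same syntax for callers, but validated via the property
```

The nice part is the calling code looks identical either way, which is why explicit `get_x()`/`set_x()` methods rarely buy you anything in Python.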
IMF stopping in here. Good job France! Couldn’t have done it better!
I’ve gotta go sign up some more countries for predatory loans and strangle some economies. Tootles!
Exactly. He “stole” millions from companies stealing billions, and thus was eaten.
They probably have turned over logs because of legal pressure, and it sounds like they anticipated that. Moxie has been around the cypherpunk scene for a while, so they know what they’re doing.
Plus the paper on the double ratchet algorithm is out there. https://en.m.wikipedia.org/wiki/Double_Ratchet_Algorithm
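If anyone’s curious, the symmetric-key ratchet step from the published spec is small enough to sketch. This is just an illustration of the KDF chain described there (HMAC-SHA256 keyed with the current chain key, with single-byte constants separating the message key from the next chain key), not Signal’s actual code:

```python
# Sketch of one symmetric-key ratchet (KDF chain) step from the Double Ratchet
# spec: HMAC-SHA256 keyed with the current chain key, using distinct one-byte
# constants to derive the message key and the next chain key. Illustration only.
import hashlib
import hmac


def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance a sending/receiving chain by one message."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain_key, message_key


# Each message gets a fresh key, and knowing a later chain key doesn't let you
# recover earlier message keys (forward secrecy along the chain).
ck = hashlib.sha256(b"example shared secret").digest()  # made-up starting key
ck, mk1 = kdf_chain_step(ck)
ck, mk2 = kdf_chain_step(ck)
```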
Signal uses Google Cloud Platform for their servers, for one.
Then I think it’s something to do with metadata.
Sites are much more contained now. It’s much more like a profile per site.
Container tabs are still a thing in FF. This is based on that work, if I remember correctly.
😂 I didn’t notice the last time they had a nationwide outage either.
While at the same time being a gross person.
Yeah, it was SourceForge and SVN.
For real. Numbers are strings? Yeah, okay.
YAML is better. UCL porn though. 🥵 Things are getting niche when UCL shows up.
That’s what I decided.
It will be more informative, and I have lots of options for hosting.
That’s not a bad strategy. Just gotta add some leftist politics to the mix.
There was (is?) the YaCy project, which used a distributed index that the individual nodes would contribute to.
A hybrid of the original Yahoo! and Google is probably the best option. Sites submit themselves, they get reviewed, and an algorithm catalogs the contents. So curation and automatic indexing together.
This would be great news if it happened, especially if it brings back the ability to embed the engine.
Postgres is better, but embedded Postgres isn’t something that’s going to happen.