A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 1 Post
  • 436 Comments
Joined 7 months ago
Cake day: June 25th, 2024

  • I think these tabs are meant for experts who know how to interpret a full log. It seems to me like VirusTotal uses Acrobat Reader or something similar to open the files. I’m not an expert on what Acrobat is supposed to do once it runs. Sure, it’s going to make some system calls, as every piece of software does. And there is something with internet URLs. That could be some phishing-link detection or URL prefetching that is part of either Acrobat or VirusTotal. And Acrobat Reader seems to be calling home to check for updates, which triggers the “low” IDS rule. Everything else is pretty much “NOT FOUND” or “INFO” and tells the story of how Acrobat Reader operates. None of it is flagged or shown in red text.

    I’d treat those PDFs like any other one. Don’t just click on any random link in them, and if the PDF contains a form, don’t enter your private details and submit them unless you’ve verified where that form sends them. But I doubt that’s happening here.
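If anyone wants to pull the same report programmatically instead of reading the tabs: VirusTotal’s public v3 API serves file reports by hash. A minimal sketch, assuming you have your own API key (`YOUR_API_KEY` and the sample bytes are placeholders; this only builds the request, it doesn’t send it):

```python
# Sketch: build a VirusTotal API v3 file-report lookup for a PDF.
# The /files/{sha256} endpoint and the x-apikey header come from the
# public v3 API; the key and sample bytes below are placeholders.
import hashlib

API_BASE = "https://www.virustotal.com/api/v3"

def file_report_request(pdf_bytes: bytes, api_key: str) -> tuple[str, dict]:
    """Return (url, headers) for fetching the file's existing report."""
    sha256 = hashlib.sha256(pdf_bytes).hexdigest()
    return f"{API_BASE}/files/{sha256}", {"x-apikey": api_key}

url, headers = file_report_request(b"%PDF-1.7 placeholder", "YOUR_API_KEY")
print(url)  # ends with the PDF's SHA-256 hash
```

From there you’d GET that URL (e.g. with `requests`) and read the report attributes from the JSON response, though the behaviour tabs in the web UI show the same information.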



  • Lol. Yeah, I get it. Though I still think the rich companies dictate a lot of things. They do a lot of lobbying and pay people to make sure it’s not them who fund the majority of the country; they choose how much you pay for medication and everyday items; they choose to spy on everyone on the internet. They make you buy things you don’t need and make housing prices subject to speculation. They make everyone addicted to their phones, spending several hours a day on them. They separate society into filter bubbles. I think a lot of these things aren’t liked by the people, or are extremely unhealthy. Yet they are a thing and never change, I think because some people will this into existence. Sure, they’re far from almighty. But it’s enough control that they already have over everyone.

    And I think since they can use the internet as a tool for their interests (even though it was ultimately invented to connect people), they could just as well do the same with AI. I mean, they train those models and choose in which ways they’re biased, and what they can and cannot talk about. If that’s paired with the surveillance tech that’s already inside every smart TV, smart appliance, or Alexa… it’ll be kind of a dystopian sci-fi movie where someone else watches your steps all day and uses that to manipulate people, some kind of puppet master whom the bots really work for.

    I’m really unsure. Sure, almost everything can be hacked. But does that really have an effect on the broader picture? Every time I see some major hack, the next day it’s business as usual and everything keeps working as it used to.









  • That’s kind of what happens when somebody re-uses already-assigned namespaces for a different purpose. Same with other domains, or if you mess with IP addresses or MAC addresses. The internet is filled with RFCs and old standards that need to be factored in.

    And I don’t really see Google at fault here. They seem to have implemented this to specification, so technically they’re “right”. The question is: is the RFC any good? Or do we have other RFCs contradicting it? Usually these things are well-written. If everything is okay, it’s the network administrator’s fault for configuring something wrong… I’m not saying that’s bad… It’s just that computers and the internet are very complicated, and sometimes you’re not aware of all the consequences of technical debt. And we have a lot of technical debt.

    Still, I don’t see any way around implementing a technology and an RFC to specification. We’d run into far worse issues if everyone did random things because they think they know better. It has to be predictable, and a specification has to be followed to the letter. Or the specification has to go altogether.

    The issue here is that second “may” clause. It should have been prohibited from the beginning, because it just causes issues like this one. That’s kind of what Google is doing now, though. If you ask me, they probably wrote that paragraph because it’s the default behaviour anyway (looking up previously unknown TLDs via DNS), and they can’t really prevent that. But that’s what ultimately causes the issues, so they wrote that warning. The only proper solution is to be strict and break it intentionally, so no one gets the idea to re-use .local… But judging from your post, that hasn’t happened until now.

    Linux, macOS, etc. are also technically “right” if they choose to adopt that “may” clause. It just leads to the consequences laid out in the quoted sentence: they’re going to confuse users.


  • Any DNS query for a name ending with “.local.” MUST be sent to the
    mDNS IPv4 link-local multicast […]
    Implementers MAY choose to look up such names concurrently via other
    mechanisms (e.g., Unicast DNS) and coalesce the results in some
    fashion. Implementers choosing to do this should be aware of the
    potential for user confusion when a given name can produce different
    results depending on external network conditions […]

    The RFC warns about these exact issues. You MAY do something else, but then the blame is on you…
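The MUST/MAY split the RFC describes boils down to a routing decision at query time: anything under .local goes to the mDNS link-local multicast address, everything else to your unicast resolver. A minimal sketch of that decision (the multicast address and port are the real mDNS constants from RFC 6762; the unicast resolver address here is a made-up placeholder):

```python
# Sketch of the RFC 6762 MUST clause: queries for names ending in
# ".local." go to the mDNS link-local multicast address; everything
# else goes to ordinary unicast DNS.
MDNS_IPV4 = "224.0.0.251"   # mDNS IPv4 link-local multicast (RFC 6762)
MDNS_PORT = 5353

def resolver_target(name: str) -> tuple[str, int]:
    """Decide where a DNS query for `name` must be sent."""
    # Normalize: treat "printer.local" and "printer.local." the same.
    normalized = name.rstrip(".").lower()
    if normalized == "local" or normalized.endswith(".local"):
        return (MDNS_IPV4, MDNS_PORT)
    # Placeholder unicast resolver; in practice, your configured DNS server.
    return ("192.0.2.53", 53)

print(resolver_target("printer.local."))  # → ('224.0.0.251', 5353)
print(resolver_target("example.com"))     # → ('192.0.2.53', 53)
```

The “MAY” clause is what happens when an implementation sends the .local query to both targets at once and merges whatever comes back, which is exactly where the confusion starts.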




  • Well, if it can replace senior software engineers… wouldn’t it also be able to do almost all of the other jobs? Or are you referring to some specific future where AI advances massively but robotics does not, and handymen are still safe?

    I’d say if all humans are unemployed, society will change massively. We can’t really tell how that would work. But if machines/AI do all the jobs and put food on the table… I don’t really know what other people would be doing. I think I’d relax and pursue a few hobbies and interests. Or it’d be some dystopia where humankind is oppressed by the machines and I’d fight for the resistance.

    But regardless… In a world like that, money wouldn’t work the way it does now. Neither would salaries for labor mean anything.


  • Hmmh. I mean, for me it’s kind of the other way round. I started with GIMP because it was free. I never saw any reason to buy Adobe software (or others) and then also invest the time to learn how to use it. I roughly know where to find things in GIMP and don’t know any other workflow. But I don’t do much photo editing, so I wouldn’t really know. Even as an amateur, the navigation in GIMP often feels cumbersome, and sometimes it’s hard to grasp how you’re supposed to do something. I always hoped we’d get another big photo-editing suite as Free Software, or GIMP would do a complete overhaul. But it is how it is. I mean, I don’t really care, but only because I don’t need a lot of photo editing in my life 😉 It’s likely an entirely different story for a lot of other people, and I can relate to that.