• 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • Obviously this is all stupid and you’ll find problems anywhere you choose to look.

    The problem I’m finding is this: if Facebook truly is betting on AI becoming better as a way to encourage growth, then why are they further poisoning their own datasets? Even if you exclude everything your own bots say from your training data, which you could probably do since you know who they are, this still encourages more AI slop on the platform. You don’t know how much of the “engagement” you’re driving (which they are likely just turning around and feeding back into the AI training set) is actually human, AI grifter, or someone poisoning the well by making your AIs talk to themselves. If you actually cared to make your AI better, you couldn’t use any of the responses to your bots, as most of them will be of dubious provenance at best.

    Personally I’m rooting for the coming Habsburg-AI problem, so I don’t really have that much of an issue with Facebook deciding more poison is a brilliant business move. But uh… it seems real dumb if you’re actually interested in having a functional LLM.



  • Is it the tech? Or is it media literacy?

    I’ve messed around with AI on a lark, but would never dream of using it for anything important. I feel like it’s pretty common knowledge that AI will just make shit up if it wants to, so even when I’m just playing around with it I take everything it says with a heavy grain of salt.

    I think ease of use is definitely a component of it, but reading your message I can’t help but wonder if the problem instead lies in critical engagement. Can they read something and actively discern whether the source is to be trusted? Or are they simply reading what is put in front of them, then turning around to you and saying, “Well, this is what the magic box says. I don’t know what to tell you”?