• 0 Posts
  • 111 Comments
Joined 2 years ago
Cake day: July 11th, 2023


  • Half of the ways people were getting around guardrails in the early ChatGPT models was berating the AI into doing what they wanted

    I thought the process of getting around guardrails was an increasingly complicated series of ways of getting it to pretend to be someone else who doesn’t have guardrails, and then having it answer as though it’s that character.





  • If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.

    Or hearing the Beatles’ White Album and believing it tells you that a race war is coming and that you should work to spark it off, then hide in the desert for a time, only to return at the right moment to save the day and take over LA. That one caused several murders.

    But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

    If you’re sufficiently detached from reality, nearly anything validates the psychosis.



  • Schadrach@lemmy.sdf.org to linuxmemes@lemmy.world · Linux as the true Trojan! · English · 2 months ago

    Really it’s actually capitalism that supposes people are too dumb to make their own choices or know how a business is run, and thus shouldn’t have say over company choices.

    Really it’s actually that businesses with that structure tend to perform better in a market economy, because no one forces businesses to be started as “dictatorships run by bosses that effectively have unilateral control over all choices of the company” other than the people starting that business themselves. You can literally start a business organized as a co-op (which by your definitions is fundamentally a socialist or communist entity) - there’s nothing preventing that from being the organizing structure. The complaint instead tends to be that no one is forcing existing successful businesses to change their structure and that a new co-op has to compete in a market where non-co-op businesses also operate.

    If co-ops were a generally more effective model, you’d expect them to be more numerous and more influential. And they do alright for themselves in some spaces. For example in the US many of the biggest co-ops are agricultural.


  • To be clear, when you say “seeded from” you mean an image that was analyzed as part of building the image-classifying statistical model that is then essentially run in reverse to produce images, yes?

    And you are arguing that every image analyzed to calculate the weights on that model is in a meaningful way contained in every image it generated?

    I’m trying to nail down exactly what you mean when you say “seeded by.”


  • OK, so this is just the general anti-AI image generation argument where you believe any image generated is in some meaningful way a copy of every image analyzed to produce the statistical model that eventually generated it?

    I’m surprised you’re going the CSAM route with this and not just arguing that any AI generated sexually explicit image of a woman is nonconsensual porn of literally every woman who has ever posted a photo on social media.


  • was seeded with the face of a 15yr old and that they really are 15 for all intents and purposes.

    That’s…not how AI image generation works? AI image generation isn’t just building a collage from random images in a database - the model doesn’t have a database of images within it at all - it just has a bunch of statistical weightings and a network configuration that essentially form a statistical model for classifying images, which is told to produce whatever input maximizes an output resembling the prompt, starting from a seed. It’s not “seeded with an image of a 15 year old”, it’s seeded with white noise and basically asked what that white noise would have to look like to resemble (in this case) “woman porn miniskirt”, and that’s repeated a few times until the resulting image is stable.
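    A rough sketch of that loop, purely as an illustration - this is my own toy stand-in in Python, not any real model’s code; score() here is a dummy in place of the trained network, and “woman” is just a placeholder prompt string:

        import numpy as np

        def score(img, prompt):
            # Dummy stand-in for the trained network: rates how well `img`
            # matches `prompt`. Here it just compares against a fixed target,
            # which is obviously not how a real model scores images.
            target = np.full(img.shape, 0.5)   # pretend this encodes the prompt
            return -np.mean((img - target) ** 2)

        rng = np.random.default_rng(42)        # the "seed" is an RNG seed, not a photo
        img = rng.normal(size=(64, 64))        # start from pure white noise

        for step in range(200):
            # nudge the image in a random direction and keep the change only
            # if the resemblance to the prompt (the score) improves
            proposal = img + 0.05 * rng.normal(size=img.shape)
            if score(proposal, "woman") > score(img, "woman"):
                img = proposal

        print("final score:", score(img, "woman"))

    The point of the toy loop is just the shape of the process: it starts from noise and iterates toward the prompt; at no step does it retrieve a stored photo of anyone.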

    Unless you’re arguing that somewhere in the millions of images tagged “woman” that were analyzed to build that statistical model there is probably at least one person under 18, and that any image of “woman” generated by such a model is therefore necessarily underage because the weightings were impacted, however slightly, by that image or images. In which case you could also argue that all drawn images of humans are underage, because whoever drew them has probably seen a child at some point and therefore everything they draw is tainted by having ever been exposed to children.



  • Schadrach@lemmy.sdf.org to Technology@lemmy.world · *Permanently Deleted* · English · edited · 3 months ago

    A more apt comparison would be people who go out of their way to hurt animals.

    Is it? That person is going out of their way to do actual violence. It feels like arguing that watching a slasher movie makes someone more likely to go commit murder is a much closer analogy to arguing that watching a cartoon of a child engaged in sexual activity or whatever makes someone more likely to molest a real kid.

    We could make it a video game about molesting kids, with Postal or Hatred as our points of comparison, if that would help. I’m sure someone somewhere has made such a game, and I’m absolutely sure you’d consider COD “fun and escapism” while someone playing that sort of game is doing so “in bad faith”, despite both being simulations of something that is definitely illegal, with the core of the argument being that one causes the person to want to do the illegal thing more and the other does not.