Google to pause Gemini AI image generation after refusing to show White people

Google will pause the image generation feature of its artificial intelligence model, Gemini, after the model refused to show images of White people when prompted.

  • Andy@slrpnk.net · 11 months ago

    I think the interesting thing about this is that these LLMs are essentially like children: they don’t have the benefit of years and years of social training to learn our complex set of unspoken rules and exceptions.

    Race consciousness is such an ever-present element of our social interactions, and many of us have been habituated not to really notice it. So it’s totally understandable to me that LLMs reproduce our highly contradictory set of rules imperfectly.

    To be honest, I think that if we can set aside our understandable tendency to avoid these discussions, since they’re usually instigated by racist trolls, there are some weird and often unexamined social tendencies we can interrogate.

    I think it’s helpful to remind ourselves frequently that race is real like gender, but not like sex. Race exists because when people encountered new cultures, they invented a pseudoscience to create the concept of whiteness.

    Whiteness makes no sense. Who is white is highly subjective, and it has always been associated with the dominant mainstream culture, of which whiteness claims ownership. This means you either buy into the racist falsehood that white culture is interchangeable with the default culture, or accept that it has no culture at all… Whiteness really exists only in opposition to perceived racial inferiority. Fundamentally, that’s all “white” means. It’s a weird, anachronistic euphemism for “not racially inferior”.

    There are plenty of issues with our racial constructions of blackness and of being Asian, East Asian, Desi, Indigenous, and Latin, but none are quite as fucked up, imo, as the fact that we as a culture attempt to continue using the concept of “Whiteness” as a non-racist construction. In my thinking, it can be a useful tool for studying the past and for studying an unhealthy set of attitudes we’re still learning to unlearn. But it’s not possible to reform the concept, because it’s fundamentally constructed upon beliefs we’re trying to discard. If you replace every use of “white” with “not one of the lesser races”, I think you get a better understanding of why it’s never going to stop causing problems as long as we try to use it in a non-racist way.

    Today, people who were told growing up to view themselves as “white” now feel a frankly understandable sense of grievance and cultural alienation, because we’ve begun acting more consistently in recognizing that there’s really no benign version of white pride, but we never bothered to teach people to stop thinking of anyone as “white”, or taught the people who identify as white to find pride in an actual culture. Midwestern is a culture. Irish is a culture. New Englander is a culture. White has never been a culture. But if we never acknowledge that the entire concept’s only value is as a tool for understanding racism, it’s inevitable that a computer repeating our own attitudes back to us will look dumb, inconsistent, and racially biased either for or against white people.

      • Andy@slrpnk.net · 11 months ago

        That’s LEGIT.

        I’m new to learning about caste discrimination, and every time I see it come up in the news I’m just gobsmacked. It seems very messed up.

    • Prunebutt@slrpnk.net · 11 months ago

      I think the interesting thing about this is that these LLMs are essentially like children

      Naw, dog. LLMs are nothing like children. A child has an inaccurate model of the world in their head. I can explain things to them and they’ll update their beliefs and understanding.

      LLMs don’t understand. Period.

      • Andy@slrpnk.net · 11 months ago

        I think this rigid thinking is unhelpful.

        I think this presentation – which at 10 months old is already quite dated! – does a good job examining these questions in a credible and curious manner:

        Sparks of AGI: Early Experiments with GPT4 (presentation) (text)

        I fully recognize that there is a great deal of pseudomystical chicanery that a lot of people apply to LLMs’ ability to perform cognition. But I think there is also a great deal of pseudomystical chicanery underlying mainstream attitudes toward human cognition.

        People point to these and say, ‘They’re not thinking! They’re just making up words, and they’re good enough at relating words to symbolic concepts that they credibly imitate understanding concepts! It’s just a trick.’ And I wonder: why are they so sure that we’re not just doing the same trick?

        • Prunebutt@slrpnk.net · 11 months ago

          This way of thinking is accurate. And hyping LLMs to be a precursor to AGI is actually the unhelpful thing, IMHO.

          I recommend you look a bit at the work Emily M. Bender is doing. She’s a computational linguist and doesn’t have much good to say about the “Sparks of AGI” paper.

          why are they so sure that we’re not just doing the same trick?

          Because: even if we don’t know what makes up consciousness, we DO know a fair bit about how language works. And LLMs can mimic form, but they lack any semblance of intentionality. Again, Emily M. Bender can summarize this better than I can.

        • huginn@feddit.it · 11 months ago

          I can’t take that guy seriously. 16 minutes in he’s saying the model is learning while also saying it’s entirely frozen.

          It’s not learning, it’s outputting different data that was always encoded in the model because of different inputs.

          If you taught a human how to make a cake, and they recited the recipe back to you and then went and made a cake, that human demonstrably learned how to make a cake.

          If the LLM recited it back to you it’s because it either contained enough context in its window to still have the entire recipe and then ran it through the equivalent of “summarize this - layers” OR it had the entire cake recipe encoded already.

          No learning, no growth, no understanding.

          The argument about reasoning is also absurd. LLMs have not been shown to have any emergent properties. Capabilities progress linearly with parameter count. This is great in the sense that scaling model size means scaling functionality, but it is also directly indicative that “reason” is nothing more than having sufficient coverage of concepts to create models.

          Of course LLMs have models: the entire point of an LLM is to be an encoding of language. Pattern matching the inputs to the correct model improves as model coverage improves: that’s not unexpected, novel, or even interesting.

          What happens as an LLM grows in size is that decreasingly credulous humans are taken in by anthropomorphic bias and fooled by very elaborate statistics.

          I want to point out that the entire talk there is self-described as non-quantitative. Quantitative analysis of GPT4 shows it abjectly failing at comparatively simple abstract reasoning tests, one of the things he claims it does well. Getting 33% on a test that the average human scores above 90% on is a damn bad showing, barely above random chance.

          LLMs are not intelligent, they’re complex.

          But even in their greatest complexity they entirely fail to come within striking distance of even animal intelligence, much less human.

          Do you comprehend how complex your mind is?

          There are hundreds of neurotransmitters in your brain. 20 billion neocortical neurons with an average of 7 thousand connections per neuron gives a naive complexity of 1.4e14 connections. Each thought tweaks those ~7,000 connections as it passes from neuron to neuron. The same thought can bounce between neurons, and each time the signal reaches the same neuron it is changed by the previous path, by how long it has been since that neuron last fired, and by connections strengthened or weakened by other firings.
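
          Redoing the multiplication above as a quick sanity check (the 7-billion-parameter model size below is an assumed figure, chosen only to illustrate where a ratio on the order of 20,000x could come from):

```python
# Back-of-envelope check of the brain numbers above. The 7e9 (7 B)
# parameter count is an ASSUMED model size, picked only to show where
# a ratio on the order of 20,000x could come from.
neurons = 20e9                 # neocortical neurons, per the comment
connections_per_neuron = 7e3   # average synapses per neuron

total_connections = neurons * connections_per_neuron
print(f"naive connection count: {total_connections:.1e}")  # 1.4e+14

assumed_params = 7e9           # hypothetical 7B-parameter LLM
ratio = total_connections / assumed_params
print(f"brain/model ratio: {ratio:,.0f}x")  # 20,000x
```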

          If you compare parameter complexity to neural complexity, that puts the average, humdrum human mind at 20,000x the complexity of a model that cost billions to train and make… which is also static, only changed manually when they get into trouble or find better optimizations.

          And it’s still deeply flawed and incapable of most tasks. It’s just very good at convincing you with generalizations.

        • DarkThoughts@fedia.io · 11 months ago

          Sorry, but you’re giving LLMs way too much credit. All they do is very crude guesswork based on pattern recognition, with a bunch of “randomness” added in to make it feel at least somewhat natural. But if you spend any length of time chatting with one, the magic wears off very quickly.

          But at least you didn’t call them “AI”.

      • slacktoid@lemmy.ml · 11 months ago

        Do you know what word2vec is and how those vectors are generated?
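
        For anyone unfamiliar, here is a toy sketch of the skip-gram-with-negative-sampling idea behind word2vec. This is not the actual word2vec implementation; the corpus, vector dimension, and hyperparameters are all invented for illustration:

```python
import math
import random

# Toy skip-gram with one negative sample per pair -- a sketch of the
# idea behind word2vec, NOT the real implementation. Corpus, dim and
# hyperparameters are invented purely for illustration.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

dim, window, lr, epochs = 8, 2, 0.05, 200
rng = random.Random(0)
W_in = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in vocab]   # word vectors
W_out = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in vocab]  # context vectors

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sigmoid(x):
    if x < -60.0:   # clamp to avoid math.exp overflow
        return 0.0
    if x > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

def update(w, c, label):
    # One gradient step on a (word, context) pair; label is 1 for a real
    # co-occurrence, 0 for a random negative sample.
    g = sigmoid(dot(W_in[w], W_out[c])) - label
    for k in range(dim):
        vw = W_in[w][k]
        W_in[w][k] -= lr * g * W_out[c][k]
        W_out[c][k] -= lr * g * vw

for _ in range(epochs):
    for i, word in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if i != j:
                update(idx[word], idx[corpus[j]], 1)             # real context word
                update(idx[word], rng.randrange(len(vocab)), 0)  # negative sample

def similarity(a, b):
    # Cosine similarity between two learned word vectors.
    va, vb = W_in[idx[a]], W_in[idx[b]]
    return dot(va, vb) / (math.sqrt(dot(va, va)) * math.sqrt(dot(vb, vb)))
```

        The point of the sketch: nothing here "understands" anything. Vectors end up near each other purely because the words they represent appear in similar contexts, which is the distributional trick the whole thread is arguing about.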

    • SkyNTP@lemmy.ml · 11 months ago

      Here’s an idea: what if the intent of the prompt had nothing to do with race, that it was prompting simple artistic expression no different than prompting hair, or shirt, or sky colour?

      Whiteness makes no sense. Who is white is highly subjective.

      Skin tone can be measured pretty objectively. We have colour standards for describing and reproducing colours with a degree of accuracy that is sufficient for practical purposes. The label “white” itself is quite non-specific. But the entire point of the AI is to fill in the blanks anyway, to generate content from non-specific prompts. I don’t agree that trainers can’t generate some consensus about the typical colour values for “white” skin tone. “I know it when I see it.”

      Society has an absurd and unhealthy obsession with race and all that baggage.

      • Andy@slrpnk.net · 11 months ago

        I think you’re wildly missing the point.

        When someone asks to see a “white family”, they are not asking for a family with skin of a certain shade. They’re asking for an image in which our pattern recognition identifies, in the clothes, posture, hair styles, and facial features, people who could appear in a soap ad in the 1950s. People who look like they feel totally welcome in their society, who live a certain lifestyle. Simply changing skin color misses the point of the problem. Koreans look pretty white in skin color, but they have other facial features that communicate that their parents or ancestors farther back left the land of their birth and traveled to the US, likely after 1900. Additionally, based on their dress, some people might look at an image of a family with a Korean dad and say, ‘Great, that’s a white family’, while others would say, ‘Why did the model generate this? I asked for a white family.’

        There’s a world of context that our current racial terminology can’t capture because it’s not suited to our modern understanding of culture.

      • Prunebutt@slrpnk.net · 11 months ago

        Skin tone can be measured pretty objectively.

        Yeah, and so can your skull shape.

        So-called “races” are social constructs, which have only tangential overlap with measurable reality.

        “White people” is the European social construct of the “default” human being. The “absence” of race.

        That’s why “racism against white people” doesn’t exist.