Google to pause Gemini AI image generation after refusing to show White people

Google will pause the image generation feature of its artificial intelligence model, Gemini, after the model refused to show images of White people when prompted.

  • Andy@slrpnk.net · 11 months ago

    I think this rigid thinking is unhelpful.

    I think this presentation – which at 10 months old is already quite dated! – does a good job examining these questions in a credible and credulous manner:

    Sparks of AGI: Early Experiments with GPT4 (presentation) (text)

    I fully recognize that there is a great deal of pseudomystical chicanery that a lot of people apply to LLMs’ ability to perform cognition. But I think there is also a great deal of pseudomystical chicanery underlying mainstream attitudes towards human cognition.

    People point to these and say, ‘They’re not thinking! They’re just making up words, and they’re good enough at relating words to symbolic concepts that they credibly imitate understanding concepts! It’s just a trick.’ And I wonder: why are they so sure that we’re not just doing the same trick?

    • huginn@feddit.it · 11 months ago

      I can’t take that guy seriously. 16 minutes in he’s saying the model is learning while also saying it’s entirely frozen.

      It’s not learning; given different inputs, it’s outputting different data that was always encoded in the model.

      If you taught a human how to make a cake, and they recited the recipe back to you and then went and made a cake, that human has demonstrably learned how to make a cake.

      If the LLM recited it back to you, it’s because it either still had the entire recipe in its context window and ran it through the equivalent of “summarize this” layers, OR it had the entire cake recipe encoded in its weights already.

      No learning, no growth, no understanding.
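
      To make that concrete, here is a minimal sketch (toy NumPy code, not any real model’s API) of what “frozen” means: at inference time the weights are fixed, so the output is a pure function of the input and those weights, and nothing is ever written back.

      ```python
      import numpy as np

      # Toy stand-in for a frozen model: fixed "weights", no optimizer attached.
      rng = np.random.default_rng(0)
      W = rng.standard_normal((16, 16))
      W_before = W.copy()

      def generate(prompt_vec: np.ndarray) -> np.ndarray:
          """Deterministic forward pass: output depends only on the input and the fixed W."""
          return np.tanh(W @ prompt_vec)

      prompt = rng.standard_normal(16)
      out_a = generate(prompt)
      out_b = generate(prompt)        # same prompt -> identical output
      out_c = generate(prompt * 2.0)  # different prompt -> different output, same weights

      assert np.allclose(out_a, out_b)       # nothing was "learned" between calls
      assert not np.allclose(out_a, out_c)   # new output comes from new input, not new knowledge
      assert np.allclose(W, W_before)        # the weights never change at inference time
      ```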

      The argument about reasoning is also absurd. LLMs have not been shown to have any emergent properties. Capabilities improve roughly linearly with parameter count. This is great in the sense that scaling model size means scaling functionality, but it is also directly indicative that “reason” is nothing more than having sufficient coverage of concepts to create models.
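
      The “capability tracks scale smoothly” view is usually written as a power law in parameter count. A minimal sketch with made-up constants (the coefficients and exponent below are illustrative assumptions, not measured values):

      ```python
      # Hypothetical smooth scaling curve: loss falls as a power law in parameter count N.
      # A, E and ALPHA are illustrative assumptions only.
      A, E, ALPHA = 400.0, 1.7, 0.34

      def loss(n_params: float) -> float:
          return E + A / (n_params ** ALPHA)

      for n in (1e8, 1e9, 1e10, 1e11, 1e12):
          print(f"{n:.0e} params -> loss {loss(n):.3f}")
      # The curve improves steadily with scale, with no step change anywhere in it.
      ```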

      Of course LLMs have models: the entire point of an LLM is to be an encoding of language. Pattern matching inputs to the correct model improves as model coverage improves: that’s not unexpected, novel, or even interesting.

      What happens as an LLM grows in size is that decreasingly credulous humans are taken in by anthropomorphic bias and fooled by very elaborate statistics.
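
      The “elaborate statistics” framing can be made concrete with the simplest possible language model, a bigram table (my toy example, not how a transformer actually works): it encodes which words follow which, and “generates” text purely by pattern matching the last word against those counts.

      ```python
      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat ate the fish".split()

      # Count which word follows which: a tiny "encoding of language".
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(word: str) -> str:
          counts = following[word]
          # Sample proportionally to how often each continuation was seen in the corpus.
          return random.choices(list(counts), weights=list(counts.values()))[0]

      word, text = "the", ["the"]
      for _ in range(8):
          if not following[word]:
              break  # dead end: no observed continuation for this word
          word = next_word(word)
          text.append(word)
      print(" ".join(text))  # fluent-looking fragments, produced by counting alone
      ```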

      I want to point out that the entire talk is self-described as non-quantitative. Quantitative analysis of GPT4 shows it abjectly failing at comparatively simple abstract reasoning tests, one of the things he claims it does well. Getting 33% on a test that the average human scores above 90% on is a damn bad showing, barely above random chance.
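
      For context on “barely above random chance”: the answer format of that test isn’t stated here, so the option counts below are assumptions; only the 33% and 90% figures come from the comment. On a k-option multiple-choice test, pure guessing already averages 100/k percent.

      ```python
      # Rough context for the "barely above random chance" remark.
      # The test's answer format is an assumption; only 33% and 90% are quoted above.
      gpt4_score, human_score = 33, 90
      for k in (3, 4, 5):
          chance = 100 / k
          print(f"{k}-option guessing: {chance:.0f}% | "
                f"GPT4 is {gpt4_score - chance:+.0f} pts above chance, "
                f"humans {human_score - chance:+.0f} pts above")
      ```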

      LLMs are not intelligent, they’re complex.

      But even in their greatest complexity they entirely fail to come within striking distance of even animal intelligence, much less human.

      Do you comprehend how complex your mind is?

      There are hundreds of neurotransmitters in your brain, 20 billion neocortical neurons, and an average of 7 thousand connections per neuron. A naive complexity of 2.8e16 combinations. Each thought tweaks those ~7000 connections as it passes from neuron to neuron. The same thought can bounce between neurons, and each time the signal reaches the same neuron it gets changed by the previous path, by how long it has been since it last fired, and by connections strengthened or weakened by other firings.

      If you compare parameter complexity to neural complexity, that puts the average, humdrum human mind at 20,000x the complexity of a model that cost billions to train and make… which is also static, only changed manually when they get into trouble or find better optimizations.

      And it’s still deeply flawed and incapable of most tasks. It’s just very good at convincing you with generalizations.
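
      One way to read the arithmetic behind those figures (the per-connection transmitter factor and the frontier-model parameter count below are my assumptions, not anything stated above):

      ```python
      # Back-of-the-envelope reading of the numbers quoted above; assumptions flagged inline.
      neurons = 20e9            # neocortical neurons, as quoted above
      connections = 7_000       # average connections per neuron, as quoted above
      transmitter_types = 200   # "hundreds of neurotransmitters" -- assumed ~200

      synapses = neurons * connections              # ~1.4e14 raw connections
      naive_states = synapses * transmitter_types   # ~2.8e16, the figure quoted above

      model_params = 1.4e12     # assumed parameter count for a large frontier model (not stated above)
      print(f"synapses          ~ {synapses:.1e}")
      print(f"naive complexity  ~ {naive_states:.1e}")
      print(f"ratio to a {model_params:.0e}-parameter model: ~{naive_states / model_params:,.0f}x")
      ```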

    • Prunebutt@slrpnk.net · 11 months ago

      This way of thinking is accurate. And hyping LLMs to be a precursor to AGI is actually the unhelpful thing, IMHO.

      I recommend you look a bit at the work Emily M. Bender is doing. She’s a computational linguist and doesn’t have much good to say about the “Sparks of AGI” paper.

      “why are they so sure that we’re not just doing the same trick?”

      Because: even if we don’t know what makes up consciousness, we DO know a fair bit about how language works. And LLMs can mimic form, but they lack any semblance of intentionality. Again, Emily M. Bender can summarize this better than I could.

    • DarkThoughts@fedia.io · 11 months ago

      Sorry, but you’re giving LLMs way too much credit. All they do is very crude guesswork based on patterns and pattern recognition, with a bunch of “randomness” added in to make it feel at least somewhat natural. But if you spend any length of time chatting with one, the magic wears off very quickly.
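
      The “randomness added in” part is literal: generation typically samples from the model’s next-token probabilities instead of always taking the single most likely token, usually with a temperature knob. A toy sketch with made-up probabilities (not any particular model’s output):

      ```python
      import random
      import numpy as np

      # Made-up next-token scores for illustration; a real model produces tens of thousands of these.
      tokens = ["cake", "bread", "pie", "soup"]
      logits = np.array([2.0, 1.0, 0.5, -1.0])

      def sample(temperature: float) -> str:
          # Higher temperature flattens the distribution -> output feels more varied and "natural".
          probs = np.exp(logits / temperature)
          probs /= probs.sum()
          return random.choices(tokens, weights=probs)[0]

      random.seed(0)
      print([sample(0.2) for _ in range(5)])  # low temperature: almost always "cake"
      print([sample(1.5) for _ in range(5)])  # high temperature: more varied continuations
      ```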

      But at least you didn’t call them “AI”.