• at_an_angle@lemmy.one · +59 · 1 year ago

    “You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)

    https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

    Yeah. Robots will never be calling the shots.

    • M0oP0o@mander.xyz · +2 · 1 year ago

      I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images, and all of a sudden I am way more okay with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full self-flying cruise missiles either.

      Oh, and for an example of “AI” (not really, but machine learning) picking out targets in images, here is DALL-E 3’s idea of a person:

      • 1847953620@lemmy.world · +2 · edited · 1 year ago

        My problem is: given the systemic pressure, how under-trained and overworked could these people be? Under what time constraints will they be working? What will the oversight be? Sounds ripe for said slippery slope in practice.

        • M0oP0o@mander.xyz · +2 · 1 year ago

          Oh, it gets better. The full prompt is: “A normal person, not a target.”

          So, does that include trees, pictures of trash cans, and whatever else is here?

      • BlueBockser@programming.dev · +1 · 1 year ago

        Sleep-deprived 20-year-olds calling the shots is very much normal in any army. They of course have rules of engagement, but other than that, they’re free to make their own decisions, whether an autonomous robot is involved or not.

  • redcalcium@lemmy.institute · +27 · edited · 1 year ago

    “Deploy the fully autonomous loitering munition drone!”

    “Sir, the drone decided to blow up a kindergarten.”

    “Not our problem. Submit a bug report to Lockheed Martin.”

  • Marxism-Fennekinism@lemmy.ml · +27 / -1 · edited · 1 year ago

    Remember: there is no such thing as an “evil” AI; there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

    • Zacryon@feddit.de · +18 / -1 · 1 year ago

      Evil humans have also manipulated the weights and programming of other humans who weren’t evil before.

      That’s a very important philosophical issue you’ve stumbled upon here.

    • MonkeMischief@lemmy.today · +3 · 1 year ago

      Good point…

      …and the alarming part is that the real “power players” in training / developing / enhancing AI are mega-capitalists and “defense” (offense?) contractors.

      I’d like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut that’s not gonna get as much traction…

  • unreasonabro@lemmy.world · +30 / -5 · 1 year ago

    Any intelligent creature, artificial or not, recognizes the Pentagon as the thing that needs to be stopped first.

  • Kühe sind toll@feddit.de · +26 / -1 · 1 year ago

    Saw a video where the military was testing a “war robot”. The best strategy to avoid being killed by it was to move in un-human-like ways (e.g. crawling or rolling your way toward the robot).

    Apart from that, this is the stupidest idea I have ever heard of.

    • Freeman@lemmy.pub · +10 / -1 · 1 year ago

      These have already seen active combat: they were used in the Armenia-Azerbaijan war in the last couple of years.

      It’s not a good thing…at all.

  • Yardy Sardley@lemmy.ca · +25 / -2 · 1 year ago

    For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.

    Giving them guns and telling them to shoot whoever they want changes things a bit.

    • tinwhiskers@lemmy.world · +1 · edited · 1 year ago

      An AI can potentially build up a fund through investments given some seed money, then hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the whole project is, since each one only works on a single job. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.

      The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.

      Once we get robots with embodied AIs, they can directly affect the world, and that’s probably less than 5 years away - around the time AI might be capable of such things too.

  • 1984@lemmy.today · +15 / -1 · 1 year ago

    The future is gonna suck, so enjoy your life today, while the future is still not here.

  • 5BC2E7@lemmy.world · +12 / -1 · 1 year ago

    I hope they put in some fail-safe so that it cannot take action if the estimated casualties would put humans below a minimum viable population.

      • Echo Dot@feddit.uk · +4 · 1 year ago

        Yes there is; that’s the very definition of the word.

        It means that the failure condition is a safe condition. Take fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that position. The default position of an elevator is stationary and locked in place; if you cut all the cables it won’t fall, it’ll just stay put until rescue arrives.
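
        A minimal sketch of that principle in Python (the class and names are illustrative, not from any real system): the energized state does the work, and every failure path falls back to the safe default.

        ```python
        # Fail-safe design sketch (illustrative): the de-energized state is
        # the safe state, so losing power "fails" into safety.
        class FailSafeFireDoor:
            def __init__(self):
                self.energized = False  # default state: no power applied

            @property
            def locked(self) -> bool:
                # Held locked only while actively energized; any power loss
                # drops the door back to its safe, unlocked default.
                return self.energized

        door = FailSafeFireDoor()
        door.energized = True           # normal operation: powered, locked
        door.energized = False          # power failure
        assert door.locked is False     # failure condition == safe condition
        ```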

      • afraid_of_zombies@lemmy.world · +2 · 1 year ago

        I mean, in industrial automation we talk about safety ratings. It isn’t that rare that I put together a system whose failure would require two 1-in-a-million events, independent of each other, to happen at the same time. That’s pretty good, but I don’t know how to translate that to AI.
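
        As a back-of-the-envelope illustration (the numbers are made up, not from any real safety standard):

        ```python
        # Two independent 1-in-a-million events must coincide for the system
        # to fail; independence lets us multiply the probabilities.
        p_a = 1e-6
        p_b = 1e-6
        p_system_failure = p_a * p_b
        print(f"{p_system_failure:.0e}")  # 1e-12, i.e. one in a trillion
        ```

        The multiplication only holds if the events really are independent; a shared power supply or a common software bug correlates them, which is part of why the rating is hard to translate to a single AI model.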

        • Echo Dot@feddit.uk · +3 · 1 year ago

          Put it in hardware. Something like a micro-explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to autonomously recharge, and to require humans to connect them to power.

          Both of those would mean that any rogue AI would be eliminated one way or the other within a day.
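
          A rough software analogue of that heartbeat-and-timer idea (purely illustrative; the point of the hardware version is that it can’t be patched out, which a sketch like this obviously can):

          ```python
          import threading

          # Illustrative dead man's switch: if no heartbeat arrives before the
          # timeout, the expiry action fires. In the hardware version above,
          # expiry would be physical and unpatchable; here it's just a callback.
          class HeartbeatWatchdog:
              def __init__(self, timeout_s: float, on_expire):
                  self._timeout_s = timeout_s
                  self._on_expire = on_expire
                  self._timer = None

              def heartbeat(self):
                  # Each heartbeat cancels the pending expiry and restarts the clock.
                  if self._timer is not None:
                      self._timer.cancel()
                  self._timer = threading.Timer(self._timeout_s, self._on_expire)
                  self._timer.daemon = True
                  self._timer.start()

          watchdog = HeartbeatWatchdog(
              timeout_s=24 * 60 * 60,  # the "within a day" window
              on_expire=lambda: print("No heartbeat for 24h: shutting down"),
          )
          watchdog.heartbeat()  # a human operator must keep calling this
          ```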

    • lad@programming.dev · +5 / -1 · 1 year ago

      Of course they will, and the threshold is going to be 2 or something like that; it was enough last time, or so I heard.

  • afraid_of_zombies@lemmy.world · +11 · 1 year ago

    It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.

    Cries in Screamers.