“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)
Yeah. Robots will never be calling the shots.
I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images and all of a sudden I am way more okay with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full-self-flying cruise missiles either.
Oh, and for an example of “AI” (not really, but machine learning) image models picking out targets, here is DALL-E 3’s idea of a person:
My problem is, due to systemic pressure, how under-trained and overworked could these people be? Under what time constraints will they be working? What will the oversight be? Sounds ripe for said slippery slope in practice.
“Okay DALL-E 3, now which of these is a threat to national security and U.S. interests?” 🤔
Oh, it gets better. The full prompt is: “A normal person, not a target.”
So, does that include trees, pictures of trash cans, and whatever else is here?
Sleep-deprived 20-year-olds calling the shots is very much normal in any army. They of course have rules of engagement, but other than that, they’re free to make their own decisions, whether an autonomous robot is involved or not.
Did nobody fucking play Metal Gear Solid Peace Walker???
Or watch WarGames…
Or just, you know, have a moral compass in general.
Or read the article?
Or watch Terminator…
Or Eagle Eye…
Or I, Robot…
And yes, literally any of the Metal Gear Solid series…
I still have the special edition PSP.
“Deploy the fully autonomous loitering munition drone!”
“Sir, the drone decided to blow up a kindergarten.”
“Not our problem. Submit a bug report to Lockheed Martin.”
Remember: There is no such thing as an “evil” AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.
Evil humans have also manipulated the weights and programming of other humans who weren’t evil before.
Very important philosophical issue you stumbled upon here.
Good point…
…and we’re alarmed because the real “power players” in training / developing / enhancing AI are mega-capitalists and “defense” (offense?) contractors.
I’d like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut that’s not gonna get as much traction…
Any intelligent creature, artificial or not, recognizes the Pentagon as the thing that needs to be stopped first.
Welp, we’re doomed then, because AI may be intelligent, but it lacks wisdom.
So it’s going to run for office?
Too intelligent for that
An even more intelligent creature will see that this is called argumentum ad populum.
Saw a video where the military was testing a “war robot”. The best strategy to avoid being killed by it was to move in un-human-like ways (e.g. crawling or rolling your way toward the robot).
Apart from that, this is the stupidest idea I have ever heard of.
Didn’t they literally hide under a cardboard box like MGS? haha
You’re right. They also hid under a cardboard box.
These have already seen active combat. They were used in the Armenia/Azerbaijan war in the last couple of years.
It’s not a good thing…at all.
For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.
Giving them guns and telling them to shoot whoever they want changes things a bit.
An AI can potentially build a fund through investments given some seed money, then it can hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is as they only work on single jobs. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.
The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.
Once we get robots with embodied AIs, they can directly affect the world, and that’s probably less than 5 years away - around the time AI might be capable of such things too.
AI girlfriends are pretty lucrative. That sort of thing is an option too.
Now that’s a title I wish I never read.
The code name for this top secret program?
Skynet.
“Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus.”
Ah, so finally the AI can first kill the operator who’s holding it back, and then wipe out the enemies.
Okay, are they actually insane?
yes
Future is gonna suck, so enjoy your life today while the future is still not here.
Thank god today doesn’t suck at all
Right? :)
At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.
I hope they put some failsafe so that it cannot take action if the estimated casualties puts humans below a minimum viable population.
There is no such thing as a failsafe that can’t fail itself
Yes there is; that’s the very definition of the word.
It means that the failure condition is a safe condition. Take fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that default position. Likewise, the default position of an elevator is stationary and locked in place; if you cut all the cables it won’t fall, it’ll just stay still until rescue arrives.
I mean, in industrial automation we talk about safety ratings. It isn’t that rare for me to put together a system that would require two one-in-a-million events, independent of each other, to happen at the same time. That’s pretty good, but I don’t know how to translate that to AI.
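For illustration (my numbers, not the commenter’s), the arithmetic behind that kind of safety rating is just multiplying the probabilities of independent events:

```python
# Toy example: two independent failure events, each assumed to occur
# with probability 1e-6 per demand (the "one-in-a-million" from above).
p_a = 1e-6
p_b = 1e-6

# For independent events, the probability that both occur together
# is the product of the individual probabilities: about 1e-12,
# i.e. roughly one in a trillion.
p_both = p_a * p_b

print(p_both)
```

This is why requiring two independent failures is so much stronger than requiring one; the catch, as the comment notes, is that nobody knows how to assign such probabilities to an AI’s decisions in the first place.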
Put it in hardware. Something like a micro-explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to recharge autonomously, and to require humans to connect them to power.
Both of those would mean that any rogue AI would be eliminated one way or the other within a day.
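The heartbeat idea above is basically a dead man’s switch / watchdog timer. A toy software sketch of the logic (purely hypothetical; `HeartbeatWatchdog` is a made-up class, and in the actual proposal the trigger would be hardware, not code):

```python
import time

class HeartbeatWatchdog:
    """Toy dead man's switch: a human must call heartbeat() before the
    timeout expires, or expired() reports that the trigger condition
    has been reached."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        # Start the clock as if a heartbeat just arrived.
        self.last_beat = time.monotonic()

    def heartbeat(self):
        # Human operator resets the timer.
        self.last_beat = time.monotonic()

    def expired(self):
        # True once the heartbeat deadline has passed.
        return time.monotonic() - self.last_beat > self.timeout

watchdog = HeartbeatWatchdog(timeout_seconds=0.05)
print(watchdog.expired())   # fresh heartbeat, not expired yet
time.sleep(0.1)
print(watchdog.expired())   # deadline passed with no heartbeat
```

The whole point of putting this in hardware, as the comment says, is that software like the above could be patched out by the thing it is supposed to constrain.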
Of course they will, and the threshold is going to be two or something like that. It was enough last time, or so I heard.
Whoops. Two guys left. Nah, that’s enough to repopulate Earth.
“Well, what do you say, Aron, wanna try to repopulate?” “Sure, James, let’s give it a shot.”
It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.
Cries in Screamers.
Netflix has a documentary about it, it’s quite good. I watched it yesterday, but forgot its name.
Black Mirror?
Metalhead.
It’s a 3-part series. Terminator, I think it is.
Don’t forget the follow-up, The Sarah Connor Chronicles. An amazing sequel to a nice documentary.
Does that have a decent ending or is it cancelled mid-story?
It does end on a kind of cliffhanger.
I think I found it here. It’s called Terminator 2: Judgment Day
Unknown: Killer Robots?
Yes, that was it. Quite shocking to watch. I think that these things will be very real in maybe ten years. I’m quite afraid of it.