Exactly, they’re a captive audience, and moreover they are legally incompetent to consent to a contracted business relationship like this
If this were a department of education AI, or even some kind of transparently administered non-profit organization, I’d be fine with this, but the fact that this is being developed for some for-profit company that can jack up its rates and cut off public schools whenever it wants is bullshit. Like, I’m not opposed to the technology of LLMs at all, I think they’re actually pretty neat, but our social and economic systems have a lot of exploitative trash in them that cool technologies can inadvertently exacerbate.
An American security contractor and a Chinese embassy employee are at a bar. The American says, “I gotta say, your propaganda is impressive. You sure know how to keep your people in line.”
“Oh, you’re too gracious,” the embassy worker says. “And besides, it’s nothing compared to American propaganda.”
The contractor chokes on his drink and gives his friend a bewildered look.
“What are you talking about? There’s no propaganda in America.”
There are more than three things
Not everything that is worth discussing has a source. Abstract ideas and hypothetical scenarios (among other things) have their places in rhetoric and communication.
A lot of people who think they’re saying “[actual fact]” are really just stating “[subjective opinion],” and they call any criticism of their opinions “[incoherent rage]”
Yeah, I’m not a Texan, but I also disagree about this. Also, Austin has produced some amazing music over the years (for example, a random Austin band I’ve been in love with recently is Being Dead).
A gerrymandered Congress, no campaign finance laws, a bought-and-paid-for judiciary: allow us to introduce ourselves
They’re very good at identifying talented developers /s
The devices should be returned to inmates immediately. Prison administrators should then slap themselves in the face once for implementing them poorly to begin with, slap themselves in the face several times for overreacting to a viral story without any reason to believe there was an active or imminent problem with any of their inmates, and deliver a tooth-loosening punch to their own faces for thinking they could punish these inmates by taking away their education to cover up their screw-up.
After that, hire a real IT person who knows what they’re doing (by paying them decently, allowing remote work, and not drug testing them), and then listen to them.
Not victimizing all of the student inmates because the prison invested in a poorly designed system that could potentially be exploited, when none of the students had attempted that exploit or were likely even aware of it
Every prisoner who knew about that password
Meanwhile, back in reality
Wright confirmed no one incarcerated in Washington prisons had attempted to unlock their devices but said the decision was “made out of an abundance of caution.”
They were taken for reasons that inmates had nothing to do with, they have not been replaced, and it’s unclear when they’ll be returned. Inmates who are enrolled in college courses are having to handwrite papers that are due soon.
Also, if something is technically possible but illegal for the CIA/FBI/etc. to do, it just means they have to try to hide the fact that they’re still doing it
Maybe this mirror of it will?
But I’m guessing it’s talking about the claim that officers were able to confirm a firearm was present on the scene only ~9% of the time.
Don’t think that shows up; this article is previously unpublished stuff, I believe
For at least nine months, between October 2017 and July 2018, Scott DeDore tracked ShotSpotter’s accuracy in identifying confirmed gunshots. DeDore regularly shared his findings with Chicago police and ShotSpotter, and even attempted to hone the tool’s precision by working alongside the company to install additional sensors, documents obtained through public records requests show. Over the course of those nine months, according to the records, ShotSpotter correctly detected a gunshot in 63 of 135 instances in which a person was struck, an accuracy rate of about 47 percent.
One month after DeDore sent his last available report, then mayor Rahm Emanuel signed a new three-year, $33 million contract with ShotSpotter (the company has since rebranded as SoundThinking). It covered 12 police districts—100 square miles—and made Chicago the company’s largest customer at the time.
These records represent a look into a small corner of Chicago’s southwest side from more than half a decade ago. But they offer a unique window into ShotSpotter and its role in an increasingly surveilled city. And they came at a time when the city was reinventing its policing strategy. Six years later, Chicago is again at a crossroad, as a new mayoral administration “reimagines” public safety and mulls the fate of ShotSpotter when its contract expires in mid-February.
Fair enough, but I think this article is reasonably critical
But critics warn the system is unproven at best — and at worst, providing a technological justification for the killing of thousands of Palestinian civilians.
“It appears to be an attack aimed at maximum devastation of the Gaza Strip,” says Lucy Suchman, an anthropologist and professor emeritus at Lancaster University in England who studies military technology. If the AI system is really working as claimed by Israel’s military, “how do you explain that?” she asks.
…
The Israeli military did not respond directly to NPR’s inquiries about the Gospel. In the November 2 post, it said the system allows the military to “produce targets for precise attacks on infrastructures associated with Hamas, while causing great damage to the enemy and minimal harm to those not involved,” according to an unnamed spokesperson.
But critics question whether the Gospel and other associated AI systems are in fact performing as the military claims. Khlaaf notes that artificial intelligence depends entirely on training data to make its decisions.
“The nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data, and not any type of reasoning, factual evidence, or ‘causation,’” she says. “Given the track record of high error-rates of AI systems, imprecisely and biasedly automating targets is really not far from indiscriminate targeting.”
Some accusations about the Gospel go further. A report by the Israeli publication +972 Magazine and the Hebrew-language outlet Local Call asserts that the system is being used to manufacture targets so that Israeli military forces can continue to bombard Gaza at an enormous rate, punishing the general Palestinian population.
Each one of those has a bunch of particular nuances, but in general, yeah, I think they could and should in a lot of those cases
Yeah, it’s a big problem with a lot of little parts to be tackled
Then the government should give them the resources (actually, I think a whole separate agency should be established that develops open source software for any government agency or anyone else who wants to use it, but that’s kind of beside the point).
I don’t think that’s true, and even if it were, I think we should be willing to pay a premium to make sure essential systems that support the public good are being administered in democratic ways (e.g., by public agencies that are required to give public reports to elected lawmakers and be subject to citizens’ FOIA requests).
A lot of stupid ideas hang on for a really long time. Like, we still have monarchies in the 21st century world.
I 100% agree this is a significant problem too; I just haven’t come across any good articles about it recently