cultural reviewer and dabbler in stylistic premonitions
Via the pine64 blog update about their e-ink tablet TIL about inkput (using OnlineHTR) which appears to be a step in the right direction.
“Given they were trained on our data, it makes sense that it should be public commons – that way we all benefit from the processing of our data”
I wonder how many people besides the author of this article are upset solely about the profit-from-copyright-infringement aspect of automated plagiarism and bullshit generation, and thus would be satisfied by the models being made more widely available.
The inherent plagiarism aspect of LLMs seems far more offensive to me than the copyright infringement, but both of those problems pale in comparison to the effects on humanity of masses of people relying on bullshit generators with outputs that are convincingly-plausible-yet-totally-wrong (and/or subtly wrong) far more often than anyone notices.
I liked the author’s earlier very-unlikely-to-be-met-demand activism last year better:
…which at least yielded the amusingly misleading headline OpenAI ordered to delete ChatGPT over false death claims (it’s technically true - a court didn’t order it, but a guy who goes by the name “That One Privacy Guy” while blogging on linkedin did).
I looked at that, and thought “ha, that is a funny and obviously fake screenshot of a headline, created to ridicule photomatt for being petty in his fight with his company’s biggest competitor”.
Then, after closing this tab I did a double take and thought: maybe it’s actually real?
And, it turns out, yeah, he really actually did that (after a court injunction required them to remove the checkbox which required users to pledge that they were “not affiliated with WP Engine in any way, financially or otherwise”):
same here; other articles there load fine but this one gives me HTTP 500 with content-length 0.
(the empty body tag in your screenshot is generated by firefox while rendering the zero-length response from the server, btw.)
it’s more likely they’re a regular-sized linux user and it’s only their inflatable penguin which is giant
they do not work for individual applications
as someone else replied to you earlier, waypipe exists, and is packaged in distros, and does what you’re asking for.
There is also a newer thing called wprs, “Like xpra, but for Wayland, and written in Rust”: https://github.com/wayland-transpositor/wprs#comparison-to-waypipe which sounds promising
big oof.
“We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.”
the author of this substack is woefully misinformed about the state of technology 🤦
it has, in fact, been possible for several years already for anyone to quickly generate convincing images (not to mention videos) of fictional scenes in real locations with very little effort.
“The photograph—which appeared on the Associated Press feed, I think—was simply taken from a higher vantage point.”
Wow, it keeps getting worse. They’re going full CSI on this photo, drawing a circle around a building on google street view where they think the photographer might have been, but they aren’t even going to bother to try to confirm their vague memory of having seen AP publishing it? wtf?
Fwiw, I also thought the image looked a little neural network-y (something about the slightly less-straight-than-they-used-to-be lines of some of the vehicles) so i spent a few seconds doing a reverse image search and found this snopes page from which i am convinced that that particular pileup of cars really did happen as it was also photographed by multiple other people.
RedHat was a major military contractor with job postings like this current one [archive] long before they were bought by another older and larger military contractor.
https://en.wikipedia.org/wiki/IBM_and_World_War_II
https://web.archive.org/web/20240530005438/https://www.redhat.com/en/resources/israeli-defense-forces-case-study (original is 404 for some reason)
only hobbyists and artisans still use the standalone `carrot.py`

that depends on `peeler`.

in enterprise environments everyone uses the `pymixedveggies` package (created using `pip freeze` of course) which helpfully vendors the latest peeled carrot along with many other things. just unpack it into a clean container and go on your way.
The canonical documentation is https://www.kernel.org/doc/Documentation/filesystems/proc.rst (ctrl-f `oom`) but if you search a bit you’ll find various guides that might be easier to digest.
https://www.baeldung.com/linux/memory-overcommitment-oom-killer looks like an informative recent article on the subject, and reminds me that my knowledge is a bit outdated. (TIL about the choom(1) command which was added to util-linux in 2018 as an alternative to manipulating things in `/proc` directly…)
https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9 from 2018 might also be worth reading.
How to make your adjustments persist for a given desktop application is left as an exercise to the reader :)
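To make the choom//proc discussion above concrete, here is a minimal sketch of adjusting a process’s OOM-killer preference (Linux-only; raising the score needs no privileges, lowering it does — and choom requires util-linux 2.33 or newer):

```shell
# show the current OOM-kill preference of this shell (usually 0)
cat /proc/$$/oom_score_adj
# make this shell a preferred victim; the range is -1000 (never kill) to 1000
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj
# equivalently, with util-linux 2.33+:
# choom -p $$ -n 500
```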
I’m not sure what this comic is trying to say, but in my recent experience a single misbehaving website can still consume all available swap, at which point Linux will sometimes lock up completely for many minutes before the out-of-memory killer decides what to kill - and even then it sometimes kills the desktop environment instead of the browser.
(I do know how to use `oom_adj`; I’m talking about the default configuration on popular desktop distros.)
404 Media neglected to link to her website, which is https://ada-ada-ada.art/
I think it depends which side of the debate one is on?
$ systemd-analyze calendar tomorrow
Failed to parse calendar specification 'tomorrow': Invalid argument
Hint: this expression is a valid timestamp. Use 'systemd-analyze timestamp "tomorrow"' instead?
$ systemd-analyze timestamp tuesday
Failed to parse "tuesday": Invalid argument
Hint: this expression is a valid calendar specification. Use 'systemd-analyze calendar "tuesday"' instead?
ಠ_ಠ
$ for day in Mon Tue Wed Thu Fri Sat Sun; do TZ=UTC systemd-analyze calendar "$day 02-29"|tail -2; done
Next elapse: Mon 2044-02-29 00:00:00 UTC
From now: 19 years 4 months left
Next elapse: Tue 2028-02-29 00:00:00 UTC
From now: 3 years 4 months left
Next elapse: Wed 2040-02-29 00:00:00 UTC
From now: 15 years 4 months left
Next elapse: Thu 2052-02-29 00:00:00 UTC
From now: 27 years 4 months left
Next elapse: Fri 2036-02-29 00:00:00 UTC
From now: 11 years 4 months left
Next elapse: Sat 2048-02-29 00:00:00 UTC
From now: 23 years 4 months left
Next elapse: Sun 2032-02-29 00:00:00 UTC
From now: 7 years 4 months left
(It checks out.)
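The same weekdays can be cross-checked with GNU date (note that `-d` is a GNU coreutils extension, so this won’t work with BSD date):

```shell
# print the weekday of Feb 29 for each upcoming leap year
for y in 2028 2032 2036 2040 2044 2048 2052; do
  printf '%s: %s\n' "$y" "$(LC_ALL=C date -u -d "$y-02-29" +%a)"
done
```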
Surprisingly, its calendar specification parser actually allows for 31 days in every month:
$ TZ=UTC systemd-analyze calendar '02-29' && echo OK || echo not OK
Original form: 02-29
Normalized form: *-02-29 00:00:00
Next elapse: Tue 2028-02-29 00:00:00 UTC
From now: 3 years 4 months left
OK
$ TZ=UTC systemd-analyze calendar '02-30' && echo OK || echo not OK
Original form: 02-30
Normalized form: *-02-30 00:00:00
Next elapse: never
OK
$ TZ=UTC systemd-analyze calendar '02-31' && echo OK || echo not OK
Original form: 02-31
Normalized form: *-02-31 00:00:00
Next elapse: never
OK
$ TZ=UTC systemd-analyze calendar '02-32' && echo OK || echo not OK
Failed to parse calendar specification '02-32': Invalid argument
not OK
except: pass