

Unfortunately it’s hard for the rest of us to tell if you actually think you want a video to save you from having to read 18 sentences or if you’re just taking the piss lol
For platforms that don’t accept those types of edits, the link OP tried to submit: https://www.theverge.com/news/690815/bill-gates-linus-torvalds-meeting-photo
An empty stomach
Hungry for my beloved starch
Life in Latvia
Knock at door. “Who is?” “Free potato”. Open door. Is secret police.
We just sent the code
Somehow this phrase triggered a memory of this short comedy sketch: https://youtu.be/LButXcZ57pc
Tbh I thought it was a bunch of non-lemmy platforms (e.g., mbin which fedia.io runs - anecdotally it usually happens due to some types of edits not federating well), but if someone from infosec.pub (which runs lemmy) also had the problem then I’m actually not sure what the common factor is lol
edit: the common factor might just be instances that have blocked lemmy.ml, which currently includes fedia.io (my instance) and infosec.pub (the other commenter’s instance), though I’m surprised links to lemmy.ml’s hosted images are included in the block
Yes! It still maintains some features not in mainline Mastodon, which I guess is why infosec.exchange runs it
Image link for those on platforms that don’t see it (e.g., me): https://lemmy.ml/pictrs/image/745658dd-60ef-44f9-bcf9-290aa9f23573.webp
That video of them interviewing people on the street with it was pretty fun!
Wow, they literally added more horse armor lol
FYI: OpenCritic average is moderately lower at (currently) 78/100 (82% recommend) https://opencritic.com/game/18413/tempest-rising
After many years of selectively evaluating and purchasing bundles as my main source of new games, I’ve come to wonder if it would’ve been better to just buy individual games, at whatever the going price was, when I actually wanted to play them. The rate at which I get through games is far lower than the rate at which games show up in “good” bundles. In the end I’m not even sure I’ve saved money (given how many games I’ve bought but still haven’t played), and evaluating whether something’s a good deal takes extra time too.
The upside is way more potential variety of games to pull from in my library, but if I only play at most like 1-2 dozen new games a year then I’m not sure that counts for much 🫠
So they literally agree not using an LLM would increase your framerate.
Well, yes, but the point is that while you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where the same applies), so that downside seems kind of moot.
Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?
If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.
For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.
It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM for whatever that’s worth:
Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.
When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures— your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)
Eh, I think that one’s mostly on the community / players giving up games as soon as anything bad happens (turning the 30-70 and 40-60 games, where you still have decent odds of winning, into 5-95 games as a self-fulfilling prophecy), plus regular players getting better over time (mistakes and misplays are more likely to be punished and leads are more likely to be capitalized on).
The give-up culture wasn’t as bad much earlier in the game’s life, at least in my NA-centric exposure to solo queue.
Searching for the phrase, documentation matches for Taiga so maybe you’re right!
Would be curious to read the LLM output.
It looks like it’s available in the linked study’s paper (near the end)
I do some game modding, and sometimes have to hack together software to help with it, some of which ends up public.
One of my programs relied on the location of other, existing files, so at runtime it would poke around to see where the user had launched it from, alerting them if it was running from an unsupported location. If that happened, an interactive message box popped up with the title “UNSUPPORTED LOCATION” and text that said, verbatim sans my [notes]:
“Running [this program] from [unsupported] folder is NOT SUPPORTED, and is likely to produce errors. Run [other program] instead.
If you want to run [this program] from here anyway, type “I understand”.”
You can’t skip the message or just “OK” it to dismiss it; if you try, the program immediately begins a managed shutdown of itself to prevent any of the aforementioned potential errors. I STILL had a user message me saying that making them type “I understand” was a weird thing to require in order to use the program. Thankfully I think they’ve been the only one so far, so it’s certainly not the norm, but the average computer user is also much less tech-savvy than someone downloading mods for a video game.
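For the curious, that gate boils down to something like this minimal Python sketch. It’s console-flavoured rather than an actual GUI message box, and all the file/phrase names here are hypothetical stand-ins, not the real program’s:

```python
import os
import sys

# Hypothetical names -- the real program's layout and wording differ.
REQUIRED_SIBLING = "SomeGame.exe"   # file expected next to the tool when supported
CONFIRM_PHRASE = "I understand"

def launched_from_supported_location() -> bool:
    """True if the expected sibling file sits next to this script."""
    here = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.exists(os.path.join(here, REQUIRED_SIBLING))

def phrase_matches(answer: str) -> bool:
    """Exact-match check for the override phrase (whitespace-trimmed)."""
    return answer.strip() == CONFIRM_PHRASE

def guard_or_exit() -> None:
    """Warn about an unsupported location and require the typed phrase;
    anything else triggers a clean shutdown instead of risking errors."""
    if launched_from_supported_location():
        return
    print("UNSUPPORTED LOCATION")
    print("Running this program from this folder is NOT SUPPORTED "
          "and is likely to produce errors.")
    answer = input(f'To run it from here anyway, type "{CONFIRM_PHRASE}": ')
    if not phrase_matches(answer):
        print("Shutting down to prevent errors.")
        sys.exit(1)
```

Requiring an exact typed phrase (rather than a clickable “OK”) is the point: it forces a deliberate action, so the warning can’t be reflexively dismissed.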
I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably also still largely true for people who don’t look into it too hard. Part of my perspective comes from this random video I watched not long ago, which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.
Misc points:
So yes, they are a long way from selling “100% fair” phones, but it seems like they’re moving the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.