

I find that LLMs also tend to produce very heavy-handed, kitschy content. Nuance is beyond them.
Honestly, Windows isn’t ready for the desktop either; it’s just unready in ways most people are already familiar with.
Things like an OS update breaking the system should be rare, not so common that people are barely surprised when it happens to them. In a unified system developed as one integral product by one company there should be one config UI, not at least three (one of which is essentially undocumented). “Use third-party software to disable core features of the OS” shouldn’t be sensible advice.
Windows is horribly janky, it’s just common enough that people accept that jank as an unavoidable part of using a computer.
Not everyone needs to talk to everyone. But many people need to talk to many people.
Microsoft had to abandon the initial Vista project and start over because they couldn’t manage a team of 1000 developers. People working on adjacent features had to go through so many layers of management that in some cases the closest shared manager was Bill Gates, just to get a change in the shutdown code reflected in the shutdown dialog.
Huge teams become exponentially harder to manage efficiently.
Most modern mainboards don’t even support ACPI S3/S4 anymore. The ACPI spec is pretty badly written and most implementations were flaky in some way. So when ACPI S0ix (aka Modern Standby) came around the old states were essentially abandoned.
Of course S0ix is less a hibernation and more kindly asking the OS to turn off the screen and consider using fewer resources.
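On Linux you can actually see which of these states the firmware still advertises; the kernel exposes them under /sys/power. A quick (Linux-only) check, with fallbacks in case the interfaces aren’t there:

```shell
# /sys/power/state lists the high-level sleep states the kernel will accept.
# /sys/power/mem_sleep shows what "mem" actually means on this machine:
# "deep" is real ACPI S3, "s2idle" is the S0ix-style suspend-to-idle.
cat /sys/power/state 2>/dev/null || echo "no /sys/power/state (not Linux?)"
cat /sys/power/mem_sleep 2>/dev/null || echo "no mem_sleep interface"
```

On a lot of recent laptops you’ll only see `s2idle` in mem_sleep, which is exactly the “old states were abandoned” situation.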
Welp, there goes the neighborhood. If they want to do an IPO they’ll probably enshittify the hell out of the platform and jettison all remotely raunchy communities. Because nothing says “good investment” like a service that just drove out a fair chunk of its user base.
That undersells them slightly.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference between those and a real answer.
They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers are always trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.
It’ll be marketed as Skyrim with all LLM text and end up as Oblivion with prefab text chunks.
Even disregarding the fact that current LLMs can’t stop hallucinating and going off track (which seems to be an inherent property of the approach), they need crazy amounts of memory. If you don’t want the game to use a tiny model with a bad quantization, you can probably expect to spend at least 20 gigs of VRAM and a fair chunk of the GPU’s power on just the LLM.
What we might see is a game that uses a small neural net to match freeform player input to a dialogue tree. But that’s nothing like full LLM-driven dialogue.
Given that prisons are an industry in the States and that inmates are one of their main sources of cheap labor, the high recidivism rate is there to maximize profits.
Because giving answers is not an LLM’s job. An LLM’s job is to generate text that looks like an answer. And we then try to coax that into generating correct answers as often as possible, with mixed results.
I remember talking to someone about where LLMs are and aren’t useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn’t been trained on.
The other person insisted that that is exactly what LLMs are great at. They wouldn’t explain how exactly the LLM was supposed to know how my company’s internal software, which is a trade secret, is structured.
But hey, I figured I’d give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions… on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.
Oddly enough, that’s not what my company’s ASDIS is. It’s almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.
I manually disabled HSP in pulseaudio. I’d rather use an external mic than subject myself to the atrocious audio quality of HSP.
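For anyone wanting to do the same: the blunt way on classic PulseAudio is to pin the headset’s card profile to A2DP so it can’t fall back to the headset (HSP) profile. A sketch, with the card name below being a made-up example (list your own first); this obviously needs a running PulseAudio daemon:

```shell
# Find the bluez card name for your headset.
pactl list short cards

# Pin it to the high-quality A2DP profile instead of headset_head_unit (HSP).
# "bluez_card.AA_BB_CC_DD_EE_FF" is a placeholder; substitute yours.
pactl set-card-profile bluez_card.AA_BB_CC_DD_EE_FF a2dp_sink
```

The trade-off is that the headset’s built-in mic goes away entirely, which was the point for me.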
I installed Garuda and then immediately switched my theme to Breeze. I don’t know what that says about me.
I wouldn’t call their Windows support stellar, either. There’s only one error code for any and all problems and RTXes can be damn finicky if you’re unlucky.
sfc /scannow
does fix certain problems, just not nearly as many as the Microsoft support forum would like.
I do agree with you on the log, although that’s often because whichever component is misbehaving just doesn’t believe in error logs. I’m looking at you, Nvidia.
That’s why I’ll make damn sure they’ll make that second branch first.
Mind you, the most likely result is that I’ll still see branches with 50+ commits with meaningless names because nobody ever rebases anything.
I’m kinda planning on teaching my team how to use interactive rebases to clean the history before a merge request.
The first thing they’ll learn is to make a temporary second branch so they can just toss their borked one if they screw up. I’m not going to deal with their git issues for them.
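The whole workflow fits in a few commands. Here’s a self-contained sketch using a throwaway repo (the branch names and the scripted GIT_SEQUENCE_EDITOR are just there so the demo runs unattended; interactively you’d edit the todo list by hand):

```shell
# Build a throwaway repo with a messy feature branch, take a backup branch,
# then squash the mess with an interactive rebase.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git version

echo base > file; git add file; git commit -qm "initial commit"

git checkout -qb feature
echo one   >> file; git commit -aqm "wip"
echo two   >> file; git commit -aqm "fix typo"
echo three >> file; git commit -aqm "wip again"

# The safety net: a second branch still pointing at the messy state.
git branch feature-backup

# Script the todo list: keep the first commit, fixup the rest into it.
GIT_SEQUENCE_EDITOR='sed -i -e "s/^pick/fixup/" -e "1s/^fixup/pick/"' \
    git rebase -i -q "$base"

echo "commits on feature after rebase: $(git rev-list --count "$base"..feature)"
echo "commits on the backup branch:    $(git rev-list --count "$base"..feature-backup)"
```

If the rebase goes sideways, `git rebase --abort` gets you back, and worst case the backup branch still has everything, so nobody has to come crying to me about a destroyed history.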
My bad. I went with the WD Red Plus, model WD40EFPX. It’s basically the successor to the old CMR Red line. The Pro line has 7200 RPM and is a bit noisier, which isn’t great for a living room server.
I’ll correct my earlier comment.
Yeah, it doesn’t take a lot to build a decent home server. I just rebuilt mine (the old one’s Turion II Neo was perhaps a bit too weak) and the most expensive parts were the HDDs. I didn’t want to reuse the old ones.
A slightly underclocked Athlon 3000G, 16 gigs of spare RAM, and three 4 TB WD Red Pluses give me all the power I actually need at a reasonable power budget. I initially wanted to go with an N100 but those never support more than two SATA drives directly.
Depends. If you want something that will keep your files reasonably safe and accessible then a laptop isn’t great because most of them won’t let you mount multiple hard drives without doing something silly like running everything over USB.
Of course that’s where an old desktop is the computer of choice.
Now the big question is how many patents are relevant and who owns them. And even if it turns out to have cheap licensing, beating HDMI won’t be easy, as DisplayPort demonstrates. Technological superiority doesn’t mean shit if you can’t overcome HDMI’s network effect.