

I’m convinced that the people who are fascinated by LLM chatbots are the ones who aren’t any better than a chatbot at whatever they do. That is to say, they can’t do shit.
When everyone was talking about them paying their workers fairly, I did not expect it to be 60 bucks a year. It sounds like an insult, to be honest.
Fair use or something
We’re trying to build the Torment Nexus from the famous novel Don’t Build The Torment Nexus, but we keep failing miserably, and the Torment Nexus keeps blowing up, killing random bystanders. We keep trying, though.
You’re either an LLM, or you don’t know how your brain works.
What made you think of this idea?
The one where you aren’t frustrated by using it.
Yeah, you need to be really brave to set up your system incorrectly.
You’re doing something wrong, maybe ask someone knowledgeable for help with your system. It doesn’t happen to other people.
Can’t confirm, I’m old and a nerd and I love C++
Oh yeah, you absolutely can test it.
And then it gives you (and this is a real example, with real function names removed)
dirpath=$(find_something)
…
rm -rf $dirpath/*
do_something_in_the_dir $dirpath
And it will work, but if that first lookup fails and $dirpath comes out empty, instead of failing gracefully it expands rm -rf $dirpath/* into rm -rf /* and wipes your hard drive clean.
You can find shit like that on the regular Internet too, but the difference is that there it will get downvoted and some nerd will leave a snarky comment explaining why it’s stupid. When an LLM gives it to you, you have no way to distinguish working code from a slow-boiling trap.
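For contrast, failing gracefully here takes only a handful of lines. A minimal sketch of my own, keeping the same made-up placeholder names (find_something, do_something_in_the_dir) as above:

#!/bin/sh
set -eu  # abort on any failed command or any unset variable

# refuse to continue if the lookup fails or returns nothing usable
dirpath=$(find_something) || exit 1
if [ -z "$dirpath" ] || [ ! -d "$dirpath" ]; then
    echo "find_something returned no usable directory, aborting" >&2
    exit 1
fi

# quoted and :?-guarded, so an empty value can never expand into rm -rf /*
rm -rf -- "${dirpath:?}"/*
do_something_in_the_dir "$dirpath"

The emptiness check and the ${dirpath:?} guard are deliberately redundant: next to an rm -rf you want two independent safety nets.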
As with a lot of things in your life, you think you know something, but actually you don’t.
You. I couldn’t avoid you. That’s what I’m mildly afraid of.
SteamOS is basically Arch Linux with KDE, plus Steam set to autorun in its special mode. It would even be easier to set up yourself.
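As far as I know, that “special mode” boils down to running Steam’s gamepad UI inside a gamescope session. A rough sketch, assuming steam and gamescope are already installed (my guess at the shape of it, not Valve’s exact session script):

# launch Steam's gamepad UI inside the gamescope microcompositor
gamescope -e -- steam -gamepadui

On SteamOS proper this runs as its own session at login; recreating it on stock Arch is mostly a matter of wiring that command into a session entry.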
What’s scary is that chatbots will make more people like you.
I see you aren’t grasping the concept and are now saying random words to hide that fact. But then again, it’s to be expected; we did kind of start with the idea that you lack higher cognitive functions.
More like how some people are afraid of needles but aren’t afraid of deadly diseases. Their primitive model of reality lets them draw a connection between the prick and the pain, but not between an organism invisible to the naked eye and a gruesome death.
See, this is the problem I’m talking about. You think you can gauge whether the code works or not, but even for small pieces (and in some cases, especially for small pieces) there is a world of very bad, very dangerous shit that lies between “works” and “doesn’t work”.
And it’s just as dangerous when you trust it to explain something to you. That is by definition something you don’t know, and therefore can’t check.
It’s not that an LLM can’t know truth; that’s obvious, but beside the point. It’s that the user can’t really tell where the lies are, not to the degree you can when getting info from a human.
So you really need to check everything: every claim, every word, every sound. You can’t assume good intentions (there are no intentions, in the real sense of the word), and you can’t extrapolate or interpolate. Any word in the data you’re getting might be a lie, with the same probability as any other word.
Checking properly takes so much effort that you either skip some of it or spend more time than you would have without the layer of lies.
I’m pretty sure both Nebula and Floatplane scout their talent themselves; you can’t just decide to be on there. And they’re looking for something more sophisticated than a guy yelling the n-word at children in whatever videogame is currently popular.