

In many cases that kind of answer is correct though. People ask for things that aren’t a good idea on a regular basis. Sometimes what they want is correct for their circumstances, but often not.
There is a manual pre-installed on your machine for most available commands. You just type man followed by the name of the thing you want the manual for. Many commands also have a --help option that prints a summary of the basic options.
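For example, using ls (assuming a GNU/Linux system; the exact output varies between systems):

```shell
# Open the full manual page for ls (press q to quit)
man ls

# Print a short summary of ls's options to the terminal
ls --help
```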
I should point out this isn’t Linux specific either. Many of these commands come from Unix or from other systems entirely; macOS actually has a very similar command line. It’s more that Linux users tend to use and recommend the command line more, partly because it’s the way of doing things that works across the largest number of distributions and setups, and partly because lots of technical users prefer the command line anyway. That’s also why people complain about Windows command lines being annoying. I say command lines, plural, because Windows actually has two of them (cmd and PowerShell) for some odd reason. Anyway, I hope this helped explain why things are the way they are.
KDE has settings for extra mouse buttons. Linux Mint is kind of behind in several areas unfortunately.
KDE’s settings can probably deal with this; at least that worked on my machine. Hyprland also supports remapping extra mouse buttons.
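For Hyprland specifically, a minimal sketch of what extra-button binds might look like in hyprland.conf (the button codes and the playerctl commands here are just illustrative assumptions; side buttons are often mouse:275 and mouse:276, but the actual codes depend on your mouse, so check them with a tool like wev or libinput debug-events):

```
# hyprland.conf — hypothetical bindings for the two side buttons
# mouse:275 / mouse:276 are common codes for back/forward buttons
bind = , mouse:275, exec, playerctl previous
bind = , mouse:276, exec, playerctl next
```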
Yeah this is terrible from a security and usability point of view. Just stop using proprietary bs systems. Why do you think so many technical people use Linux and avoid IoT devices like the plague? So we don’t have to deal with companies doing stuff we don’t like without a choice.
Because it’s a lot simpler and avoids the issue of dealing with printer drivers on all your machines.
It’s not an excuse. Socketed RAM has been a bottleneck for iGPUs for a while now.
They kind of did. What other chip allows for 128 GB of VRAM or has that kind of iGPU?
I’ve tried making this argument before and people never seem to agree. I think Google claims Kubernetes is actually more secure than traditional VMs, but how true that really is I have no idea. Unfortunately, though, there are already things we depend upon for security that are probably less secure than most container platforms, like ordinary Unix permissions or technologies like AppArmor and SELinux.
Did back propagation even exist in the 60s? That was a pretty fundamental change in what they do.
If we are arguing about really fundamental changes then arguably any neural network is the same and humans are the same as ChatGPT or a mouse, or even something simpler like a single layer perceptron.
I know, I have used them. It’s actually my job to do research with those kinds of models. They aren’t nearly as powerful as OpenAI’s current GPT-4o or their latest models.
I think he’s talking about people using LLMs for illegal and unethical activities such as phishing. There are already a lot of people using open-source LLMs without ethics restrictions to do bad stuff; with the power of GPT-4 behind them they would be a lot more effective.
That’s not true though. The models themselves are hella intensive to train. We already have open source programs to run LLMs at home, but they are limited to smaller open-weights models. Having a full ChatGPT model that can be run by any service provider or home server enthusiast would be a boon. It would certainly make my research more effective.
There is a lot that can be discussed in a philosophical debate. However, any 8-year-old can count how many letters are in a word. LLMs can’t reliably do that, by virtue of how they work. That suggests to me it’s not just a model/training difference. Also, evolution over millions of years improved the “hardware” and the genetic material; neither of those compares to the computing power or amount of data used to train LLMs.
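To sketch why letter counting is hard for LLMs: they operate on subword tokens, not letters. Here’s a toy tokenizer with a made-up vocabulary (real tokenizers like BPE learn their vocabularies from data, but the effect on the model’s input is the same):

```python
# Toy subword tokenizer: greedily matches the longest known chunk.
# The vocabulary below is invented purely for illustration.
VOCAB = {"straw": 101, "berry": 102, "err": 103, "b": 104, "y": 105}

def tokenize(word):
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest possible chunk first
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(VOCAB[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

# The model receives [101, 102], not the letters s-t-r-a-w-b-e-r-r-y,
# so "how many r's are in strawberry?" has no direct answer in its input.
print(tokenize("strawberry"))  # -> [101, 102]
```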
Actually humans have more computing power than is required to run an LLM. You have this backwards. LLMs are comparably a lot more efficient given how little computing power they need to run by comparison. Human brains as a piece of hardware are insanely high performance and energy efficient. I mean they include their own internal combustion engines and maintenance and security crew for fuck’s sake. Give me a human built computer that has that.
Anyway, time will tell. Personally I think it’s possible to reach a general AI eventually, I simply don’t think the LLMs approach is the one leading there.
I agree here. I do think, though, that LLMs are closer than you think. They do in fact have both attention and working memory, which is a large step forward. The fact that they can only process one medium, text, is a serious limitation though. A general-purpose AI would presumably need to process visual input, auditory input, text, and other things like various sensor types. There are other model types though, some of which take in multi-modal input to make decisions, like a self-driving car.
I think a lot of people romanticize what humans are capable of while dismissing what machines can do. Especially with the processing power and efficiency limitations that come with the simple silicon based processors that current machines are made from.
No actually it has changed pretty fundamentally. These aren’t simply a bunch of FCNs put together. Look up what a transformer is, that was one of the major breakthroughs that made modern LLMs possible.
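The core ingredient transformers added is attention. As an illustrative sketch, here is scaled dot-product attention, softmax(QKᵀ/√d)·V, in plain Python with tiny made-up matrices (a real implementation would use tensor libraries and add multi-head projections, masking, etc.):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V are lists of row vectors (lists of floats)."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Output is a weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Tiny made-up example: 2 queries attending over 3 key/value pairs
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0], [2.0], [3.0]]
print(attention(Q, K, V))
```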
ChatGPT 4o isn’t even the most advanced model, yet I have seen it do things you say it can’t. Maybe work on your prompting.
Exactly this. Things have already changed, and they keep changing as more and more people learn how and where to use these technologies. I’ve even seen teachers with a limited grasp of technology in general use this stuff.
AGI and ASI are what I am referring to. Of course we don’t actually have that right now, I never claimed we did.
It is hilarious and insulting that you’re trying to “erm actually” me when I literally work in this field doing research on uses of current-gen ML/AI models. Go fuck yourself.
Unless and until the abilities of AI reach the point where they can compensate for tech illiteracy, and we no longer need to worry about the exorbitant heat production, it shouldn’t be deployed at scale at all. Even then its use needs to be scrutinised and regulated, and that regulation appropriately enforced (which basically requires significant social and political change, so good luck).
Why wouldn’t you deploy that kind of AI at scale?
To be honest I think people keep forgetting that AI strong enough would be smarter than a human, and would probably end up deploying us at scale rather than the other way around. Terminator could one day actually happen. I am not even sure that would be a bad thing given how flawed humans are.
You could always put Linux on it. I believe there’s a way to do that for most Chromebooks nowadays.