

This is a salient point that’s well worth discussing. We shouldn’t be training large language models on any supposedly factual information people put out. It’s easy enough to call out a bad research study and have it retracted, but you can’t just explain to an AI that the study was wrong; you’d have to retrain the whole model every time. Exacerbating this is the way people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI carries exactly the same biases as the people who produced the data it was trained on.
Can, and opt not to. Big difference. I’m sure I could ask ChatGPT to write a better comment than this one, but I value the human interaction involved, and the ability to do these things on my own.
Same with many aspects of modern technology. Like, I’m sure it’s very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but when somebody else’s server goes down, I’d still like to have control over my things.