And now LLMs being trained on data generated by LLMs. No possible way that could go wrong.
I’ve never found them to be more performant, and I can’t understand the logic of why a program running inside another program would be more performant, except in comparison to unoptimised alternatives.
I’ve never used a web app that I thought was better than a local app. But I definitely understand why developers prefer them.
It’s definitely been the direction of travel for the last several years. Not because the products are better, but because it’s easier to develop for just the browser than for Mac, Windows, and Linux.
Everyone who didn’t get an Echo as a gift, I’d imagine
Musk has an AI project. Techbros have deliberately been sucking up to Trump. I’m pretty sure AI training will be declared fair use and copyright laws will remain the same for everybody else.
As you say, LLMs have really useful applications. The problem is that “being a reliable virtual assistant” is not one of them. This current push is driven by shareholders and companies who are afraid to be seen as missing out. It’s the classic case of having what you think is a solution and trying to find the problem, rather than starting from a problem and trying to find a solution.
Someone has died due to a touchscreen. A woman had a Tesla, which you put into park, forwards, or reverse with a touchscreen. She’d always had trouble with it, got it wrong, and reversed into a pond. That cut the power, so she couldn’t open the door. To get to the emergency escape handle you have to remove the speakers in the doors. So she drowned.
The kicker? Her husband was a millionaire, and he immediately put out a statement absolving Tesla and Musk of any wrongdoing.
I blame the producers. If they’d just done one film per book, all would have been fine.
Divergent is a terrible series that Shailene Woodley absolutely acts her socks off in
If you follow AI news you should know that the industry has basically run out of training data, that returns on extra training diminish sharply (so more training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.
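For a concrete sense of what “diminishing returns” means here: published scaling-law work (e.g. the Chinchilla paper) fits loss as a power law in data size, so each 10x of data buys a smaller improvement than the last. A toy Python sketch with made-up constants, just to show the shape of the curve:

```python
# Toy illustration of diminishing returns from more training data,
# shaped like a Chinchilla-style power law: loss = E + B / D**beta.
# The constants below are invented for illustration, not fitted values.
E, B, beta = 1.7, 400.0, 0.28

def loss(tokens: float) -> float:
    return E + B / tokens ** beta

prev = None
for tokens in [1e9, 1e10, 1e11, 1e12]:
    current = loss(tokens)
    gain = "" if prev is None else f"  (gain from 10x more data: {prev - current:.3f})"
    print(f"{tokens:9.0e} tokens -> loss {current:.3f}{gain}")
    prev = current
```

With these numbers each 10x of data roughly halves the gain of the previous 10x, which is why “just add more data” stops being a plan.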
You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than its predecessor or other LLMs at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.
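On the “second LLM checks its outputs” point: the pattern being described is roughly a generate-and-verify loop, and it’s easy to see why it multiplies compute per answer. A minimal sketch; `generate` and `verify` here are hypothetical stand-ins, not any vendor’s actual API:

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to the generator model.
    return f"candidate answer {random.randint(0, 9)}"

def verify(prompt: str, candidate: str) -> bool:
    # Hypothetical stand-in for a second model checking the answer.
    return candidate.endswith("7")

def answer_with_verifier(prompt: str, max_tries: int = 4) -> tuple[str, int]:
    # Each attempt costs one generator pass plus one verifier pass,
    # so per-answer compute is a multiple of single-shot decoding.
    model_calls = 0
    candidate = ""
    for _ in range(max_tries):
        candidate = generate(prompt)
        model_calls += 2  # one generation + one verification
        if verify(prompt, candidate):
            break
    return candidate, model_calls

answer, calls = answer_with_verifier("2 + 2 = ?")
print(answer, f"({calls} model calls vs 1 for a single-shot answer)")
```

Even on a first-try success that’s two model calls instead of one, and every retry adds two more, which is where the extra energy and cost come from.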
The idea that “they’ve come on in leaps and bounds over the last three years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.