No, I have an IQ below 42, pardon my mental disability sir.
Can I put an Nvidia 4090 in a Mac for AI and gaming purposes?
As someone who writes C++ every day for work (we're up to C++20 now), I somehow hate the incoming C++23 even more. Concepts and the like just keep making it worse. Although, to be fair, structured bindings in C++17 did actually help some with the syntax.
Sweet, do you have any links on how to set that up? My next goal is to set up my own Lemmy instance so I can pull various things into my own aggregation. Last I tried, I hit errors after the Rust compilation steps; need to try it again.
I was shocked as I went through the source struggling to find any modules written in C. Craziness.
There are shitty people on YouTube too, so why hate on a platform just because shitty people use it? Beats giving money to YouTube; we have to start somewhere to decentralize more.
https://odysee.com/ – this one is also worth checking out, Louis Rossmann even posts there.
What about Biden recently signing the new spying bill that expands wiretapping of US citizens?
I agree, wish this was the actual goal but it’s going to be hard to pry those rights out of their hands.
It’s weird seeing comments that outline the actual problem getting downvoted here more than the superfluous comments that do not address the real problem at all. Bizarroworld.
Would you rather a hostile foreign entity, with a vested interest in sowing destructive chaos, do it instead? That's the alternative.
I’m still on the Google bandwagon of typing this query:
stuff i am searching for before:2023
… or ideally, even before COVID-19, if you want more valuable, less tainted results. It’s only going to get worse from here; 2024 is the year of saturation with garbage data on the web (yes, I know it was already bad before, but now AI is pumping this shit out at industrial scale).
Try phind.com, it’s got an insanely advanced model trained on a ton of their own proprietary code, and free too (or paid with more features and more prompts per day, etc.)
He should have installed neovim with LSPs for Python/Rust/etc for intellisense and linting to really get her all hot and bothered.
I think it comes down to the tens of millions of dollars that the reddit executives sold out to. It’s easy to not care when someone is throwing $100 million at you. Also: fuck spez.
There’s probably even a ‘sentiment’ tracking system to automatically remove negative comments at this point.
Am I the only one in this thread who uses VSCode + GDB together? The inspection panes and the ability to set breakpoints and hover over variables to drill down into them are just great. Seems like everyone should set up their own c_cpp_properties.json and tasks.json files and give it a try.
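For anyone wanting to try this: alongside c_cpp_properties.json and a build task in tasks.json, the debug side lives in launch.json (using the Microsoft C/C++ extension's `cppdbg` type). A minimal sketch, where the program path and name are placeholders for your own build output:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug with GDB",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/build/my_app",
      "cwd": "${workspaceFolder}",
      "MIMode": "gdb",
      "setupCommands": [
        { "text": "-enable-pretty-printing" }
      ]
    }
  ]
}
```

With that in place, F5 launches the binary under GDB and the variable hover/drill-down panes work out of the box.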
I’ve been doing this for over a year now (started with GPT in 2022), and there have been massive leaps in quality and effectiveness. (Versions are sneaky, too: even GPT-4 has quietly evolved many times over without people really knowing what’s happening behind the scenes.) The problem remains the context window. Claude.ai is over 100k tokens now, I think, but the context still limits how much code an entire ‘session’ can produce within that window. I’m still trying to push every model to its limits, but another big open problem in the industry is how effectiveness, measured via “perplexity,” degrades at a given context length.
https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small
This plot shows that as the window grows (its size is proportional to the number of tokens in the code you insert plus every token the model generates), everything the model produces becomes less accurate and more perplexing overall.
But you’re right overall: these things will continue to improve, though you still need an engineer to actually make the code function in a particular environment. I just don’t get the feeling we’ll see that change within the next few years, but if it happens, every IT worker on earth is effectively obsolete, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.
Most of the time you don’t have to paste in the man page; it’s already baked into the model, and filling the context window can actually give worse results.