Final Fantasy Tactics, Dragon Quest Monsters
DeepSeek V3 is the model in question.
They’ve already started testing that at Google, for ad enhancement and for immersive ads. There’s no way they keep the chat models pristine and ad-free.
The dystopian future of “pay to use this miraculous product or it will shove advertisements down your throat in a way we know will work because we’ve trained it to sell specifically to you”
Llama is good and I’m looking forward to trying DeepSeek V3, but the big issue is that those are the frontier open source models, while 4o is no longer OpenAI’s best performing model. They just dropped o3 (god, they are literally as bad as Microsoft at naming), which shows tremendous progress in reasoning on benchmarks.
When running Llama locally I appreciate the matched capabilities like structured output, but it is objectively significantly worse than OpenAI’s models. I would like to support open source models and use them exclusively, but dang, it’s hard to give up the results.
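For anyone curious what I mean by structured output locally, here’s a minimal sketch, assuming Ollama is serving a llama3.1 model (the model name and prompt are just placeholders, adjust for whatever you actually run):

```python
# Minimal sketch of local structured output, assuming Ollama is running
# locally with a llama3.1 model pulled; model name and prompt are placeholders.
import json

import ollama

response = ollama.chat(
    model="llama3.1",
    messages=[
        {
            "role": "user",
            "content": (
                "Return JSON with keys 'language' and 'year' for: "
                "'Python first appeared in 1991.'"
            ),
        }
    ],
    format="json",  # constrain the model to emit valid JSON
)

print(json.loads(response["message"]["content"]))
```

It’s the same shape as OpenAI’s JSON mode, which is exactly what makes it tempting to flip back and forth.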
I suppose one way to start for me would be dropping Cursor and Copilot in favor of their open source equivalents, but switching my business to use Llama is a hard pill to swallow.
Imo velocity and user experience aren’t mutually exclusive; as a developer I can respond to user requests way faster with web technologies.
As a consumer, VS Code is a perfect example of why the ecosystem has value. Are there other products that fill the same roles? Absolutely, but if you were around for the transition from Bloodshed, Code::Blocks, Eclipse and the like to Sublime, VS Code and other more modern editors, you should remember how game-changing that positive feedback loop was: developer velocity paid the dev community back in the form of user experience.
Nah, Electron is an excellent technology and V8 is a remarkable engine. Maybe something like Tauri will unseat it eventually, but the ability to spin up a new product in relatively short order is good for everyone. RAM and disk usage are higher than they would be with a native app, but velocity is unparalleled.
Man, I love Let’s Encrypt. Remember how terrible SSL was before the project landed?
Rocket.Chat, I think, checks those boxes.
Guy shoulda tried emacs instead, wife is probably an elitist
I do this on my Ultra. Token speed is not great, depending on the model of course, and a lot of codebases are optimized for Nvidia and don’t even use the native Mac GPU without modifying the code, defaulting to CPU instead. I’ve had to modify about half of what I run.
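For anyone hitting the same thing, the patch is usually just swapping a hardcoded CUDA device for a capability check. A minimal PyTorch sketch of the shape of that fix (not from any specific repo):

```python
import torch
import torch.nn as nn

# Pick the best available backend: CUDA on Nvidia machines, MPS (Apple's
# Metal backend) on Apple silicon, CPU otherwise. Many repos hardcode
# "cuda" and silently fall back to CPU on a Mac.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = nn.Linear(512, 512).to(device)
x = torch.randn(8, 512, device=device)
print(device, model(x).shape)
```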
YMMV, but I find it’s actually cheaper to just use a hosted service.
If you want some specific numbers lmk