

It’s true, although the smart companies aren’t laying off workers in the first place, because they’re treating AI as a tool to enhance their productivity rather than a tool to replace them.
The thing is, if they just pared those claims down a bit, they’d be accurate. Switch from “Copilot can build an entire application for you from scratch while giving you a blowjob” to “Copilot can help developers by automating some repetitive and time-consuming tasks,” and you still have a good thing.
Weird. The cuts apparently include the cancellation of several planned games, and many of them will hit the Xbox division.
I would’ve thought that the increased productivity that Copilot theoretically gives developers would have resulted in the reduced staff still being able to finish those games.
Autocomplete on steroids, but suffering from dementia.
Yep. My TV has not and never will be on the Internet in any way. I picked it for its screen quality, and the fact that it also has “smart” components never even entered into the decision. Because those smart components will literally never do anything.
Seems like it’s cheaper and more efficient just to pay people to fuck on camera.
Oh my God… The best/worst thing about the idea of AI porn is how AI tends to forget anything that isn’t still on the screen. So now I’m imagining the camera zooming in on someone’s jibblies, then zooming out and now it’s someone else’s jibblies, and the background is completely different.
The trick to using an AI agent effectively is already knowing exactly what you want, typing the request out in excruciating detail, and being a good developer who properly reviews code so you catch all the errors and repetition the AI agent will absolutely include.
So… Yeah. 100% agree. AI agents are useful, but impossible to use if you aren’t already skilled with code.
Well, technically, yes. You’re right. But they’re a specific, narrow type of neural network, while I was thinking of the broader class and more traditional applications, like data analysis. I should have been more specific.
That’s only part of the problem. Yes, JavaScript is a fragmented clusterfuck. TypeScript is leagues better, but by no means perfect. Still, that doesn’t explain why the LLM can’t remember that I’m using Yarn while it’s processing the very instruction that told it to use Yarn. Or why it starts editing code when I tell it not to. Those issues aren’t specific to the language.
But it still manages to fuck it up.
I’ve been experimenting with using Claude’s Sonnet model in Copilot in agent mode for my job, and one of the things that’s become abundantly clear is that it has certain types of behavior that are heavily represented in the model, so it assumes you want that behavior even if you explicitly tell it you don’t.
Say you’re working in a Yarn workspaces project, and you instruct Copilot to build and test a new dashboard using an instruction file. You’ll need to include explicit and repeated reminders throughout the file to use Yarn, not npm, because even though Yarn is very popular today, there are so many older npm examples in its training data that it’s just going to assume that’s what you actually want, thereby fucking up your codebase.
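For the curious, a repo-level instruction file ends up looking something like this. The wording is my own invention, not my real file, but `.github/copilot-instructions.md` is where Copilot picks up repository custom instructions:

```markdown
<!-- .github/copilot-instructions.md (illustrative sketch) -->
# Project conventions

- This is a Yarn workspaces monorepo. ALWAYS use `yarn`, NEVER `npm` or `npx`.
- Add dependencies with `yarn workspace <name> add <pkg>`.
- Run scripts with `yarn workspace <name> run <script>`.
- Never create or edit `package-lock.json`; the only lockfile is `yarn.lock`.

# Reminder, because you will forget

- Use `yarn`, not `npm`. Your older training examples use `npm`; ignore them here.
```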
I’ve also had lots of cases where I tell it I don’t want it to edit any code, just to analyze and explain something that’s there and how to update it… and then I have to stop it from editing code anyway, because halfway through it forgot that I didn’t want edits, just explanations.
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy “dataset” that a proper neural network incorporates and reasons with, with the LLM kept (sort of) up to date in real time via MCP servers that feed it anything new the system learns.
But I don’t think we’re anywhere near there yet.
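Still, as a very rough sketch of the MCP half of that idea, using the official Python SDK’s FastMCP helper (the server name, tools, and “learned facts” store are all hypothetical):

```python
# Hypothetical MCP server exposing post-training knowledge as tools.
# Assumes the official MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fresh-facts")

# Stand-in for whatever store the system writes new knowledge into.
LEARNED: dict[str, str] = {}

@mcp.tool()
def remember(key: str, fact: str) -> str:
    """Store something the system just learned."""
    LEARNED[key] = fact
    return f"stored {key}"

@mcp.tool()
def recall(key: str) -> str:
    """Fetch a stored fact, so the model doesn't have to 'know' it."""
    return LEARNED.get(key, "nothing stored under that key")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

The model never gets retrained; it just gets handed fresh facts at inference time.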
I would argue that without consistent and enforced type hinting, dynamically typed languages get very little benefit out of their runtime type-checking. And with consistent, enforced type hinting, they might as well be considered actual statically typed languages.
Don’t get me wrong, that’s a good thing. Properly configured Python development environments basically give you both: dynamic typing at runtime and static checking while you write, even if I’m not a fan of the syntax.
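To make that concrete, a minimal example, assuming a checker like mypy is wired into the environment (the function is made up):

```python
# Type hints: ignored by the interpreter, enforced by a static checker.
def total_cents(prices: list[int]) -> int:
    """Sum a list of prices expressed in cents."""
    return sum(prices)

total_cents([100, 250])      # fine at runtime and under mypy
total_cents(["1.00", "2"])   # crashes at runtime inside sum();
                             # mypy flags the incompatible argument
                             # type before you ever run it
```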
Hasn’t been updated since 2018. Does it still work?
Oh, I know you can, but it’s optional and the syntax is kind of weird. I prefer languages that are strongly typed from the ground up and enforce it.
Python is easy, but it can also be infuriating. Every time I use it, I’m reminded how much I loathe the use of whitespace to define blocks, and I really miss the straightforward type annotations of strongly, statically typed languages.
There’s inevitably going to be some rebounding from this. It’s probably true that the large language models these companies are betting their businesses on can do some of the things entry-level grads do, but we’ve already seen several of them fail because their MBAs didn’t realize that just barfing out code is only one part of what developers do.
Source: Am developer, currently working with LLMs and related tech, none of which would be able to get anywhere without someone like me doing the work.
It’s… Well, it’s 100% fraud.
I use Opera on a Mac for all my Bing-based Edge-recommending needs.
Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.
And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.