“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
I puked after “I literally lost my only friend”. How far did you get?
Honestly, the more I read, the more I think that AI shouldn’t have been invented. Humanity is heading in the same direction as the short story “The Machine Stops”.
Never use AI for friendship, it’s like admitting you only want yes-men in your life. I don’t want to be around anyone who uses AI for emotional support.
It’s so much more effective when you keep things as neutral as possible. I will often ask it to tear apart my argument as though I am my opponent and use its tendency to align with the user against itself.
A fellow contrarian I see. I actually hate when it agrees with me so I look for holes.
You would have better luck with a dating sim than AI as emotional support. Might inspire you to make a real friend.
It’s disturbing to see how many people have created emotional connections to a word generator.
Imaginary friends used to require at least some modicum of creativity.
Right? If you told someone from the past that we outsourced imagination to computers, they’d think we live in a dystopia! Oh, wait…
We’re all word generators
We’re far more than that. We are having a conversation, transmitting our thoughts through space and time. It’s like telepathy, really. Word salad machines could never pull that off.
And we have emotions, not fake ones
I smell a finance bubble bursting in the near future tbh. I’d rather be prepared sooner than later.
Right around March 2026
I think it will break sooner, but the real impact will happen after the midterms so it can be “fake news” without threatening 🍊 Cheetolini’s presidency
I don’t expect to see midterms in 2026.
Depends on whether they can declare martial law. It’s likely, but it can’t happen too unprovoked, because that would give figures like Newsom too much ammo against Trump. Project 2025 doesn’t care though
Boil the ocean a few more times to discover 1+1=3.
All that money that could be spent improving the lives of poor people in need.
“will spend trillions of dollars on data centers” Hurray!
It’s not enough that the planet is dying. They’re speeding it up as well!
You know… if this wasn’t an “AI specific blunder” we’d probably spend some time talking about how uniquely incompetent Altman has been as someone running a business.
Nah, it’s good that they ripped off that bandaid. Parasocial AI relationships are terrible.
Happy cake day!
I wonder if Piefed users have pie days
Lemmy should steal piedays.
The worst part is that they backstepped a bit and made it “friendlier”.
Basically undoing that part.
Well, you sell what people want, even if it’s bad for them
It’s somewhere between a codependent relationship and the parasocial relationships people form with celebrities/public figures, which is the extreme end, because those usually end in stalking or death threats.
Stop it. Get some help.
“we fucked up our massive new generation product launch… oh well let’s invest trillions in new data centers” How do investors keep falling for this shit.
Don’t they have enough?!? How about they fix and optimize their fancy autocompletion software instead?
They took a path they believed would develop into something, and it’s a narrow alley they can’t turn around in. They have to keep going with more compute and power to continue the chase. Thing is, everyone else seemingly thought they were onto something and followed as well, so they’re all in the same predicament where reversing course is suicide. So they hope they can keep selling the dream a bit longer until something happens.
To be fair, it’s a lot more than just autocomplete. But it’s a lot less than what they wanted by now too.
Vibe innovation: they’re the ones who think AI will be innovative in science by spontaneously generating new scientific discoveries, without “researchers, labs, papers”
I have seen some people talk like that, and it strikes me as a religion. There’s euphoria, zeal, hope. To them, AGI is coming to usher in heaven on earth. The Singularity is like the Rapture.
Sam Altman is one of the preachers of this religion.
Don’t they have enough?!?
No no, it’s just 1 more data center bro, then we’ll fix the hallucinations, promise bro!
Fix and optimize? That’s way harder than using VC money to buy more things.
It’s a pretty clear humble-brag, no? The launch was only botched because people loved the previous personality; it’s an estimate of how much people care about the product and how much price gouging they could do later.
No it wasn’t good for OpenAI. But I doubt it changed many investor minds.
people were addicted to the AI relationship it allowed.
How indeed. It’s probably a multi-factor phenomenon which requires an anthropological study for a serious answer. (Good luck trying to get the necessary access to study them.) My guess for one factor in this, is that they have more money than they know what to do with.
The American stock market is purely vibe-driven now
How do investors keep falling for this shit.
The ROI and the supposed savings from getting rid of human technical support, and of the work of human creatives too.
He’s saying the launch was handled badly because some users are in love with GPT-4o and it should not have been removed. From the point of view of an investor, having people addicted to your product is a good thing.
Because they already know that once the AI shitbubble bursts, they will switch all the GPUs to mining Bitcoin and keep grifting the mouth breathers who believe all this horseshit.
Moving back to crypto after it already crashed, when the only people still investing in it are the ones who are easily scammed: conservatives and old people.
Fugazi
Just a few more bucks bro! I swear then it will be the revolutionary “AI” we promised it to be.
*Few more billion.
I sometimes wonder if silicon valley tech businesses in general will take a reputation hit with investors when this bubble bursts, it’s gonna be a doozy.
But then I remember how many greedy idiots there are out there pumping money into grifts in the hope of The Big Win, and my expectations of consequences are tempered.
I think it’s driven by the investors. In the case of big tech, the large institutional investors are rewarding companies any time they say “AI” and lay off workers. In the case of startups, VCs are almost exclusively investing in startups that use “AI,” and have a lean or offshore workforce.
“I literally lost my only friend overnight with no warning,” one person posted on Reddit
It was meant to be satirical at the time, but maybe Futurama wasn’t entirely off the mark. That Redditor isn’t quite at that level, but it’s still probably not healthy to form an emotional attachment to the Markov chain equivalent of a sycophantic yes-man.
I’m honestly surprised yours is not the top comment. Like, whatever, the launch was bad, but there is a serious mental health crisis if people are forming emotional bonds with the software.
Humans emotionally bond pretty easily, no? Like, we have folks attached to roombas, spiders, TV shows, and stuffed animals. I’m having a hard time thinking of any X for which I don’t personally know some person Y who is emotionally engaged with X. Maybe taxes and concrete?
Yeah, agreed. It is concerning, but it’s hard to take all those comments too literally without actually knowing what’s going on with them.
That being said, there is a huge loneliness problem that’s been growing in pretty much every developed country (and I’m sure it’s going on in developing countries too, it’s just less studied/documented). Turns out, getting everyone addicted to looking at screens all day every day probably isn’t so healthy for social development.
However, just to be devil’s advocate: Are we certain social health was even great before modern tech? Or were these issues equally present but just undiagnosed/not studied/talked about?
I think we have sufficient data to say that social health is at least very different now. See the Our World in Data topic page. In particular, one-person households have doubled.
Okay hold up. If you can get attached to a cat, you can get attached to a spider. Getting attached to an AI is weird, I agree, but when you give a lil jumping spider water and it gets comfortable around you and just starts hanging out… There’s something behind those eyes, and that’s cool. Two living beings recognizing each other, maybe not as equals obviously, but outside of the predator-prey dynamic. Idk, there’s beauty in that.
It’s a human trait. Hell, we’ll even emotionally bond with a volleyball given the right circumstances.
I can fully understand it. The average human, from my perspective and lived experience, is garbage to their contemporaries, and one is never safe from being hurt, not even by family or friends. Some people have been hurt more than others. I can fully understand the need for exchange with someone/something that genuinely doesn’t want to hurt you and that is (at least seemingly) more sapient than a pet.
Markov chain equivalent of a sycophantic yes-man.
not only that, but one that is fully owned and operated by a business that could change it any time they want, or even cease to exist completely.
This isn’t like a game where you could run your own server if you’re a big enough fan. If ChatGPT stops existing in its current form, that’s it.
sure but you can absolutely run c.ai instances locally. 4o and its cross-chat memory was probably more useful to these individuals though.
I didn’t say you can’t run any LLM on your own, but not any LLM will do. The point is they are attached to a specific version of an LLM that is not locally hostable. c.ai wouldn’t interest them any more than GPT-5 does.
There’s an entire active subreddit for people who have a “romantic relationship” with AI. It’s terrifying.
Don’t their partners kind of die each time a new chat is made?
LLMs do seem to be able to store the chats and work with the old material in new conversations, requiring an account of course. Idk, I haven’t personally used any of them that extensively.
I was going to mention it; they were having a meltdown when Altman made the new version available. Granted, some of them are probably AI posts themselves, or trolls.
I haven’t been to reddit in months, but I do need a laugh…
[Edit] Wow that sure didn’t disappoint. Or, it did but in the exact hilarious way I expected.
I wouldn’t laugh. Those people fulfill a basic human need in a way they feel safe with - probably because this safety is missing from their life. It’s not healthy to be so attached to LLMs, but to become so attached they must feel pretty isolated. And LLMs are a lot more interactive and responsive than Severus Snape, and he had lots of women “channeling” him.
I visited /r/myboyfriendisai and it was not funny.
It was genuinely fucked up on so many levels.
they had incel subs, so I’m not surprised.
After reading about the ELIZA effect, I both learned how susceptible people are to this, and that you just need to remember its core tenets to avoid being affected.
Honestly, that should have been for the better. If it’s meant to be a tool, I would much rather it behave like a tool, rather than trying to be my best friend, or an evil vizier trying to give me advice.
The fact that people got so attached to what is essentially a text generation algorithm that they were mourning its “death” is worrying, especially when it’s one that OpenAI has proven themselves to be more than able to modify as they wish.
Just as concerning is OpenAI rolling back the update to make their model “friendlier”, or that people were throwing money hand over fist at the company in the hopes of getting their “friend” back.
That can’t possibly be good news, especially when the shareholders find out that they have an iron grip over a portion of their users.
It annoys me that ChatGPT flat out lies to you when it doesn’t know the answer, and doesn’t have any system in place to admit it isn’t sure about something. It just makes it up and tells you like it’s fact.
LLMs don’t have any awareness of their internal state, so there’s no way for them to see something as a gap of knowledge.
Took me ages to understand this. I’d thought “If an AI doesn’t know something, why not just say so?”
The answer is: that wouldn’t make sense because an LLM doesn’t know ANYTHING
Thinking models can, to an extent, realize their prediction doesn’t make sense if they really know nothing, but yeah, it’s not always accurate
Wouldn’t it make sense for an AI to provide a confidence level though?
I’ve got 3 million bits of info on this topic but only 4 of them lead to this solution. Confidence level =1.5%
It doesn’t have “3 million bits of info” on a specific topic, and even if it did, it wouldn’t be able to directly measure it. It’s worth reading a bit about how LLMs work under the hood, because although somewhat dense if you’re new to the concepts, you come out knowing a lot more about what to expect when using them, what the limitations actually are, and how to use them better if you decide to go that route.
You could do this with logprobs. The language model itself has basically no real insight into its confidence but there’s more that you can get out of the model besides just the text.
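Something like this, as a rough sketch (assumes the OpenAI Python SDK; the model name and the 0.5 cutoff are placeholders I picked, not anything official):

```python
# Rough sketch: pull per-token log probabilities out of a chat completion
# and flag tokens the model was unsure about. Assumes the OpenAI Python
# SDK; the model name and the 0.5 cutoff are arbitrary placeholders.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "When did the Berlin Wall fall?"}],
    logprobs=True,    # return a logprob for each generated token
    top_logprobs=5,   # plus the 5 most likely alternatives per position
)

for tok in resp.choices[0].logprobs.content:
    p = math.exp(tok.logprob)  # logprob -> probability
    flag = "  <-- low confidence" if p < 0.5 else ""
    print(f"{tok.token!r}: {p:.3f}{flag}")
```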
The problem is that those probabilities are really “how confident are you that this text should come next in this conversation” not “how confident are you that this text is true/accurate.” It’s a fundamental limitation at the moment I think.
I think I read that RLHF kind of makes these logprobs completely unusable for that too.
It doesn’t store bits of information. All it has are neurons that form a weighted network
Got it, so there is nothing resembling context. Thx.
Well, the conversation you previously had with it is sent along with each request; that’s the only real stored memory it has
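To make that concrete, here’s a minimal sketch (OpenAI Python SDK assumed; the model name is a placeholder). The API itself is stateless, so the client keeps the transcript and resends all of it every turn:

```python
# Sketch of what "the conversation is sent" means: the API is stateless,
# so the client keeps the transcript and resends the whole thing each turn.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    # If we don't append the reply, the model "forgets" it next turn.
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("My name is Ada."))
print(chat("What's my name?"))  # only answerable because we resent the history
```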
It’s always funny to me when people do add ‘confidence scores’ to LLMs, because it always amounts to just adding ‘say how confident you are with low, medium or high in your response’ to the prompt, and then you have made-up confidences for made-up replies. And you can tell clients that it’s just made up and not actual confidence, but they will insist that they need it anyways…
And you can tell clients that it’s just made up and not actual confidence, but they will insist that they need it anyways…
That doesn’t justify flat out making shit up to everyone else, though. If a client is told information is made up but they use it anyway, that’s on the client. Although I’d argue that an LLM shouldn’t be in the business of making shit up unless specifically instructed to do so by the client.
I’m not really sure I follow.
Just to be clear, I’m not justifying anything, and I’m not involved in those projects. But the examples I know concern LLMs customized/fine-tuned for clients for specific projects (so not used by others), and those clients asking to have confidence scores, people on our side saying that it’s possible but that it wouldn’t actually say anything about actual confidence/certainty, since the models don’t have any confidence metric beyond “how likely is the next token given these previous tokens” and the clients going “that’s fine, we want it anyways”.
And if you ask me, LLMs shouldn’t be used for any of the stuff it’s used for there. It just cracks me up when the solution to “the lying machine is lying to me” is to ask the lying machine how much it’s lying. And when you tell them “it’ll lie about that too” they go “yeah, ok, that’s fine”.
And making shit up is the whole functionality of LLMs, there’s nothing there other than that. It just can make shit up pretty well sometimes.
It doesn’t admit anything, it’s a language machine
And depending on how OpenAI tweaked it this time it will either realize its mistake after being made aware of it or double down even harder on it.
I only use it for coding and it once told me my code not working was due to a bug in Webkit, so I asked it which bug specifically. It created links to bug reports but rewrote the titles of them. So initially it looked like it had numerous sources that backed up its statement but when I clicked on them those were bugs about totally different things.
It would not back down even after I specifically told it “You just made all of this shit up and even rewrote the titles” and got stuck in a loop of “I’m sorry, but you’re wrong and I am 100% sure I haven’t made a mistake”.
Kinda creepy. Especially when you think about the system rewriting reality when it comes to much more important things. Let’s just reinvent some history, that would be a good idea, right?
ChatGPT makes up everything it says. It’s just good at guessing and bullshitting.
It’s literally a guess machine …
It doesn’t know that it doesn’t know because it doesn’t actually know anything. Most models are trained on posts from the internet, like this one, where people rarely ever chime in just to admit they don’t have an answer. If you don’t know something, you either silently search the web for an answer or ask.
So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a “confidently wrong” powerhouse.
It’s pretty much the same shit that sales people do when they’re put on the spot.
It is a system that outputs the answer that is most probably correct given what it processes from the inputs. It does not have the concept of creating a lie. It is just a probability machine.
It wouldn’t finish a lyric for me yesterday because it was copyrighted. I said it was public domain and it said “You are absolutely right, given its release date it is under copyright protection”
Wtf
yeah, there are guardrails, but for copyright, not for bullshit. I guess they think copyrighted content is worse than bullshit.
From a legal standpoint, yes. Look at Trump
In the end it’s a word generator that has been trained so much it uses facts often enough to be convincing. That’s its basic architecture.
You can ask it to give a confidence level to have an indication of how sure it is of the answer.
It’s a feature of LLMs, not a bug.
It’s neither. It’s a design flaw. They’re not designed to be able to handle this type of situation correctly
You’re out there spreading misinformation, saying they’re a manipulation tool. No, they were never invented for that.
An LLM is just next-word prediction. The AI doesn’t know whether the output is correct or not, wrong or right, fact or a lie.
So no, I’m not spreading misinformation. The only thing that might be spreading misinformation here is the AI.
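If you want to see what that literally means, here’s a small sketch using Hugging Face transformers with GPT-2 (chosen only because it’s small and public; the prompt is just an example):

```python
# Sketch of "next word prediction": the model scores every token in its
# vocabulary as the possible next token, with no notion of true vs. false.
# GPT-2 via Hugging Face transformers, used only because it's small and public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of Australia is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

# Top 5 candidates: plausible-sounding continuations ranked by probability,
# whether or not they happen to be factually correct.
values, indices = probs.topk(5)
for p, idx in zip(values, indices):
    print(f"{tok.decode(idx.item())!r}: {p.item():.3f}")
```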
Saying it’s a “feature” makes it seem like it was intended which is clearly not true.
Yeah. That was kind of a joke. “It’s not a bug, it’s a feature” is a common slogan among software developers.
That’s actually one thing that got significantly improved with GPT-5, fewer hallucinations. Still not perfect of course
I’m more inclined to believe it’s gotten better at being convincing.
Did you try it though?
Someone I know (not close enough to even call an “internet friend”) formed a sadistic bond with ChatGPT and will force it to apologize and admit to being stupid or something like that when he doesn’t get the answer he’s looking for.
I guess that’s better than doing it to a person I suppose.