

No, the chance of being wrong 10 times in a row is about 2.8% (0.7¹⁰ ≈ 0.028). So the chance of being right at least once is about 97%.
Very fair comment. In my experience, even when you increase the temperature you get stuck in local minima.
I was just trying to illustrate how 70% failure rates can still be useful.
About 0.028 (0.7¹⁰).
Run something with a 70% failure rate 10 times and you get to a cumulative ~97% pass rate. LLMs don’t get tired and they can be run in parallel.
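A quick back-of-the-envelope check of that figure (plain Python; it assumes the attempts are independent, which is the generous case for an LLM rerunning the same prompt):

    # Chance of at least one success in n independent attempts,
    # each failing with probability p_fail.
    p_fail = 0.7
    n = 10
    p_all_fail = p_fail ** n             # ≈ 0.0282
    p_at_least_one = 1 - p_all_fail      # ≈ 0.9718
    print(f"{p_at_least_one:.4f}")       # 0.9718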
Think of AI as a hard-working, arrogant, knowledgeable, unimaginative junior intern.
Vibe coding is great for small, self-contained tasks. It doesn’t scale to a codebase (yet?).
Thanks. I wasn’t familiar with the abbreviation and of course Google offered no insights.
What’s “short” about the short-sightedness, though?
Gamepass attacks the developers, not the consumers.
The only part of Gamepass that is monopolistic is the friends network it creates.
Upgrading a computer is very different to adding a new sensor array all around the body.
I’m not saying upgrading older cars was the only reason for excluding lidar, but I bet it was a large factor.
This statement works from anyone’s PoV.
Technical debt.
If you promise self-driving on all cars, but the cars already on the road don’t have lidar, then no car gets lidar, because the software has to work with the sensors the whole fleet shares.
I believe what you say. I don’t believe that is what the article is saying.
I’m not attacking philosophical arguments between the 1950s and the 1980s.
I’m pointing out that the claim that consciousness must form inside a fleshy body is not supported by any evidence.
There are so many fictional films referencing MKULTRA that people assume it is fiction.
The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn’t do.
Just because everything we currently consider conscious has a physical presence does not imply that consciousness requires a physical body.
It’s hard to assess that book’s argument from the Wikipedia entry, but I don’t see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.
It’s just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn’t imply computers can’t gain consciousness, or that they need flesh and senses to do so.
> So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.
This is not a good argument.
Can’t prove they have been deleted, and can’t prove you don’t have more.
Problem created.
That looks better. Even with a fair coin, 10 heads in a row happens only about once in 1,024 tries (0.5¹⁰ ≈ 0.001; see the quick simulation below).
And if you are feeding the output back into a new instance of a model, then the quality is highly likely to degrade.
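To sanity-check the coin-flip figure empirically, here is a brute-force simulation (plain Python; the trial count is arbitrary, purely for illustration):

    import random

    # Estimate the chance of 10 heads in a row with a fair coin
    # by repeated trials.
    trials = 1_000_000
    runs_of_ten = sum(
        all(random.random() < 0.5 for _ in range(10))
        for _ in range(trials)
    )
    print(runs_of_ten / trials)  # ≈ 0.00098, i.e. about 1 in 1024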