Holy shit, here comes an s!
Full self-driving should only be implemented when the system is good enough to completely take over all driving functions, and it should only be available in vehicles without steering wheels. The Tesla solution of having “self driving” but relying on the cop-out of requiring constant user attention and feedback is ridiculous. Only when a system is truly capable of driving 100% autonomously, at a level statistically far better than a human, should any kind of self-driving be allowed on the road. Systems like Tesla’s FSD officially require you to always be ready to intervene at a moment’s notice. They know their system isn’t ready for independent use yet, so they require that manual input. But of course this encourages disengaged driving; no one actually pays attention to the road like they should, ready to intervene at a moment’s notice. Tesla’s FSD imitates true self-driving, but it pawns off the liability to drivers by requiring them to pay attention at all times. This should be illegal. Beyond mere lane-assistance technology, no self-driving tech should be allowed except in vehicles without steering wheels. If your AI can’t truly perform better than a human, it’s better for humans to be the only ones actively driving the vehicle.
This also solves the civil liability problem. Tesla’s current system has a dubious liability structure designed to pawn liability off to the driver. But if there isn’t even a steering wheel in the car, then the liability must fall entirely on the vehicle manufacturer. They are after all 100% responsible for the algorithm that controls the vehicle, and you should ultimately have legal liability for the algorithms you create. Is your company not confident enough in its self-driving tech to assume full legal liability for the actions of your vehicles? No? Then your tech isn’t good enough yet. There can be a process for car companies to subcontract out the payment of legal claims against the company. They can hire State Farm or whoever to handle insurance claims against them. But ultimately, legal liability will fall on the company.
This also avoids criminal liability. If you only allow full self-driving in vehicles without steering wheels, there is zero doubt about who is in control of the car. There isn’t a driver anymore, only passengers. Even if you’re a person sitting in the seat that would normally be a driver’s seat, it doesn’t matter. You are just a passenger legally. You can be as tired, distracted, drunk, or high as you like; you’re not getting any criminal liability for driving the vehicle. There is such a clear bright line - there is literally no steering wheel - that it is absolutely undeniable that you have zero control over the vehicle.
This actually would work under the same theory of existing drunk-driving law. People can get ticketed for drunk driving for sleeping in their cars. Even if the cops never see you driving, you can get charged for drunk driving if they find you in a position where you could drunk drive. So if you have your keys on you while sleeping drunk in a parked car, you can get charged with DD. But not having a steering wheel at all would be the equivalent of not having the keys to a vehicle - you are literally incapable of operating it. And if you are not capable of operating it, you cannot be criminally liable for any crime relating to its operation.
I think we should indict Sam Altman on two sets of charges:
A set of securities fraud charges.
8 billion counts of criminal reckless endangerment.
He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?
So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.
I think those just need to move to have their own independent sites instead of basing their operations on social media. Ultimately what they’re doing is entirely legal, but it’s way too easy for some asshat billionaire to pull some strings to get them pulled from a platform.
“What is he trying to hide‽” I dunno, man. Maybe he recognizes that there’s a bunch of unhinged weirdos who are hellbent on stalking “Satoshi,” and he doesn’t want to be harassed?
Forget being harassed. Honestly, being kidnapped is a serious concern. Whoever or whatever group Satoshi is, it’s estimated he, she, or they own something like a million bitcoins.
Kidnapping is normally a pretty poor choice of crime for a criminal gang to undertake. It had its heyday back in the early 20th century. But as the FBI really got going, and we got better at tracking down people across state lines and internationally, kidnapping became much more difficult to pull off. Kidnapping someone - physically abducting them - is the easy part. But actually sending their family a ransom letter and collecting the money in a way that can’t be traced back to you? That’s a whole different matter. Actually getting the ransom money and somehow getting it into a form you can spend, all without getting caught? That’s nearly impossible in this day and age.
But someone with a million Bitcoins? It’s entirely possible that everything needed to access those funds is entirely within that one person’s skull. Either the private keys themselves, or some way to access or generate them.
Someone with that amount of Bitcoins is actually at incredible risk for kidnapping by an organized crime outfit. We’re talking about $65 billion USD worth of assets that can be obtained by just kidnapping one person and torturing them until they give up their private keys. Then once you have them, the coins can be transferred to another account and washed through numerous transactions until they’re untraceable. And the poor bastard who gets kidnapped for this just never leaves their captors alive.
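The $65 billion figure follows from simple arithmetic. As a back-of-the-envelope check (assuming the ~1,000,000 BTC estimate above and a price of roughly $65,000 per coin, which is my assumption, not a figure from this thread):

```python
# Rough value of Satoshi's estimated holdings.
# Both numbers are assumptions: ~1,000,000 BTC held,
# at roughly $65,000 per coin.
btc_held = 1_000_000
usd_per_btc = 65_000

total_usd = btc_held * usd_per_btc
print(f"${total_usd / 1e9:.0f} billion")  # prints: $65 billion
```

At any plausible price in that neighborhood, the payoff dwarfs what any traditional kidnapping-for-ransom could ever net.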
And even if they keep their keys in their home instead of in their head? Now they’re at risk of break-in, or being held hostage during a nighttime break-in.
Hell, even just being suspected of being Satoshi would be incredibly dangerous. That’s an even more horrifying scenario. Imagine an organized crime outfit thinks you’re Satoshi, they’re incorrect, and they abduct you and torture you, demanding you give them something you are simply incapable of providing…
The iPhone remote locator function still works when the phone is powered off. It doesn’t work when the battery is completely dead, but it does work when the phone is supposedly “powered off.” This is irrefutable proof that iPhones at least retain some of their functions even when you’ve “turned them off.”
You sure it’s still not phoning home? How do you know “off” is really “off” anymore with a modern phone? It’s not like an old flip phone that you can just pop the battery out. Sure it sounds paranoid, but we’re literally talking about something that used to be the realm of crackpots and cranks - “the government is tracking all of us 24/7!” Well, it seems that’s actually literally the case now.
Wouldn’t just keeping your phone in a metal box prevent it from communicating with anything? Keep it in the box and only take it out when you need it, and only in a location that isn’t sensitive. Or hell, just make a little sleeve out of aluminum foil. Fully wrapping your phone in aluminum foil should prevent it from connecting to anything. A tinfoil hat won’t serve as an effective Faraday cage for your brain, but a foil sleeve fully wrapped around a phone will do the job. Even better, since it’s a phone, such a sleeve is easy to test: build it, put your phone in it, and try texting and calling it. If it’s fully surrounded by conductive material, the phone should be completely incapable of sending or receiving signals.
The solution is to subscribe to these services. Then create a website that offers real-time tracking information, freely to the public, of the most wealthy and powerful people in the country. Every Congressperson should have their location shown freely available to all in real time. You could call it “wheresmyrep.org” or similar. Literally all of them tracked like animals in real time, freely shown for any and all to see. Let them live in the fish bowl they’ve created for us all.
“Alright you chucklefuckers. Here’s the new law. You are required to have paper tags, the only discount you can offer is paper coupons sent through the mail to everyone in an area, and you’re never allowed to alter your prices more than once per week.”
His “first principles” logic is that humans don’t use lidar, therefore self-driving should be achievable without (expensive) enhanced vision tools.
This kind of idiocy is why people tried to build airplanes with flapping wings. Way too many people thought that the best way to create a plane was to just copy what nature did with birds. Nature showed it was possible, so just copy nature.
Something you should keep in mind is that being a monopoly is not illegal, and it never has been. If you make a great widget and, through honest competition, corner that widget market, that’s perfectly legal.
What ISN’T legal is using your market power to engage in anti-competitive behavior. It’s not illegal for Apple to dominate the phone market. It is likely illegal for Apple to use its dominance of the phone market to prohibit competing app stores from being installed on their phones. That is Apple operating in two distinct businesses - a phone manufacturer and a software retailer. Apple is using its market dominance as a phone manufacturer to gain an unfair advantage as a software retailer.
This is a pretty damning violation of federal antitrust law.
I say we indict Sam Altman for both securities fraud and 8 billion counts of reckless endangerment. He and other AI boosters are running around shouting that AGI is just around the corner, that OpenAI is creating it, and that there is a very good chance we won’t be able to control it and that it will kill us all. Well, the way I see it, there are only two possibilities:
He’s right. In which case, OpenAI is literally endangering all of humanity by its very operation. In that case, the logical thing to do would be for the rest of us to arrest everyone at OpenAI, shove them in a deep hole and never let them see the light of day again, and burn all their research and work to ashes. When someone says, “superintelligent AI cannot be stopped!” I say, “you sure about that? Because it’s humans that are making it. And humans aren’t bulletproof.”
He’s lying. This is much more likely. In that case, he is guilty of fraud. He’s falsely making claims his company has no ability to achieve, and he is taking in billions in investor money based on these lies.
He’s either a conman, or a man so dangerous he should literally be thrown in the darkest hole we can find for the rest of his life.
And no, I REALLY don’t buy the argument that if the tech allows it, that superintelligent AI is just some inevitable thing we can’t choose to stop. The proposed methods to create it all rely on giant data centers that consume gigawatts of energy to run. You’re not hiding that kind of infrastructure. If it turns out superintelligence really is possible, we pass a global treaty to ban it, and simply shoot anyone that attempts to create it. I’m sorry, but if you legitimately are threatening the survival of the entire species, I have zero qualms about putting you in the ground. We don’t let people build nuclear reactors in their basement. And if this tech really is that capable and that dangerous, it should be regulated as strongly as nuclear weapons. If OpenAI really is trying to build a super-AGI, they should be treated no differently than a terrorist group attempting to build their own nuclear weapon.
But anyway, I say we just indict him on both charges. Charge Sam Altman with both securities fraud and 8 billion counts of reckless endangerment. Let the courts figure out which one he is guilty of, because it’s definitely one or the other.
Meanwhile, in a dark and forgotten corner of my PC, I STILL have several thousand MP3s I downloaded from Kazaa back in the day.
Bezos also has a rocket company. Plus there’s Richard Branson, and others. And then you have private jet travel, massive mega-yachts, and countless other extravagances. For a certain class of billionaire, a private rocket company is a vanity project for rich sci-fi nerds. Yes, these companies have done some really good technical work, but they’re only possible because their founders were willing to sink billions into them even without any proof they’ll ever make a profit.
What you are missing is that as people’s wealth increases, their resource use just keeps going up and up and up. To the point where when people are wealthy enough, they’re using orders of magnitude more energy and resources than the average citizen of even developed countries. Billionaires have enough wealth that they can fly rockets just because they think they’re cool, even if they have no real path to profitability.
And no, the hypothetical of the robot skyscrapers is not “meaningless.” You just have a poor imagination. To have that type of world we only need one thing - a robot that can build a copy of itself from raw materials, or a series of robots that can collectively reproduce themselves from raw materials gathered in the environment. Once you have self-replicating robots, it becomes very easy to scale up to that kind of consumption on a broad scale. If you have self-replicating robots, the only real limit to the total number you can have on the planet is the total amount of sunlight available to power all of them.
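The sunlight-as-the-only-limit argument can be made concrete with a toy model. The numbers here are my own illustrative assumptions (not from the thread): each robot draws about 1 kW, doubles its population each replication cycle, and the usable solar power budget is on the order of 10^13 kW:

```python
# Toy model: self-replicating robots doubling each cycle,
# capped only by an assumed sunlight-derived power budget.
POWER_PER_ROBOT_KW = 1.0   # assumed draw per robot
POWER_BUDGET_KW = 1e13     # assumed usable solar power (order of magnitude)

robots = 1.0
cycles = 0
# Each robot builds one copy of itself per cycle, so the
# population doubles until the next doubling would exceed the budget.
while robots * POWER_PER_ROBOT_KW * 2 <= POWER_BUDGET_KW:
    robots *= 2
    cycles += 1

print(cycles, int(robots))  # prints: 43 8796093022208
```

The takeaway: even starting from a single machine, exponential replication saturates a planetary-scale power budget in only a few dozen doubling cycles, which is why the limit really is energy, not manufacturing time.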
The real point isn’t the specific examples I gave. The point, which you are missing entirely, is that total resource use is a function of wealth and technological capability. Raw population has very little impact on it. If our automation gets a lot better, or something else makes us much wealthier, we would see vast increases in total resource use even if our population was cut in half.
The problem is too many people. If the standard of living is to increase, the resource requirements become unsustainable because of massive population growth.
They’re both important. And crucially, people in developed countries use a lot more resources than those in undeveloped countries. Just look at the resource utilization of our richest people. We have billionaires operating private rocket companies! If somehow, say due to really really good automation, orbital rockets could be made cheap enough for the average person to afford, we would have average middle class people regularly launching rockets into space and taking private trips to the Moon. Just staggering levels of resource use. If we could build and maintain homes very cheaply due to advanced robotics, the average person would live in a private skyscraper if they could afford it. Imagine the average suburban lot, except with a tower built on it 100 stories tall. If it was cheap enough to build and maintain that sort of thing, that absolutely would become the norm.
They’ve already tried to send all the jobs they can to India or South America. It ultimately didn’t work. They can send some, but the language and cultural barriers, plus the difficulty of assessing quality candidates, just don’t make it viable at scale. They’ve already tried that game and it failed. Everything that can be outsourced to India already has been.
So…mostly 18-24 year olds?
As a matter of course, one should not even open a link that goes to OpenAI.
It’s best not to become dependent on these piracy engines. These models are hopelessly unprofitable, and they will not be cheap and accessible for very long. They take such colossal resources to train, billions upon billions of dollars. Currently OpenAI is trying to do the classic Silicon Valley bait and switch. They have a product that is more expensive and inefficient than the previous method. If they charge the real price for their product, they know no one will adopt it. So instead they offer their product at an artificially low price initially. They hope that everyone will become dependent, after which they can jack up their prices.
It’s the Uber model. Start by paying drivers more than they would make driving taxis, and by charging riders far less than they would pay for a taxi fare. This is possible through billions in investor subsidies. Then once everyone is dependent, slash driver pay and jack up ride prices. This is the only way for Uber to make back the billions they’ve squandered on market capture and Silicon Valley executive bloat. If we had functioning anti-monopoly law enforcement, the executives of all these companies would be in jail. But for now they’re able to take advantage of practices that would have seen them in chains two generations ago.
Same with OpenAI. They want to get all the copy-editing companies dependent on their piracy engines. They want all the graphic design companies dependent on their image-stealing tools. Then, once these companies fire their real human copy editors and graphic designers, OpenAI will start charging the real price for its services. And considering the literal hundreds of billions being poured into these hopelessly inefficient piracy engines, the rate they will have to charge will be enormous. Someone has to ultimately pay for those billions Sam Altman is sponging up. And even if they didn’t have billions of investor dollars to recoup, their ultimate goal is to gain a monopoly position in the copy editing and graphic design market. They will replace a million competing copy editors and graphic designers with a single provider - OpenAI. They’ll control the market. Once all the real human copy editors, graphic artists, and voice actors/readers have been driven from the industry and forced to move on and take jobs elsewhere, they will be able to charge whatever they please.
Any executive that lets their company become dependent on this technology is a fool. They’re a sucker, falling for a classic bait-and-switch. Hopefully enough of them are smart enough not to be suckered in by the OpenAI con job, and OpenAI can hastily be driven into bankruptcy where it belongs.
Timberborn! I do love those beavers.