


  • This absolutism is jarring against your suggestion of applying the same technology to medicine.

    Siri predates this architecture by a decade. And you still want to write off the whole thing as literally useless if it’s ever ever ever wrong… because god forbid you have to glance at whatever e-mail it points to. As if skimming one e-mail - to confirm it’s from your mom, it’s about a flight, and it mentions the time - is harder than combing through your inbox by hand.

    Confirming an answer is a lot easier than finding it from scratch. And if you’re late to the airport anyway, oh no, how terrible. Everything is ruined forever. Burn your computers and live in the woods, apparently, because one important e-mail was skipped. Your mother had to call you and then wait comfortably for an entire hour.

    Perfect reliability does not exist. No technology provides it. Even your earlier example, phone alarms: I’ve told Android to re-use the last timer, wanting twenty minutes, and it didn’t go off until 6:35 PM - because I’d set that timer at 6:15 the day before, and it re-used the end time instead of the duration. I’ve had physical analog alarm clocks fail to go off in the morning. I did not abandon the concept of time, following that betrayal.

    The world did not end because a machine fucked up.


  • Your example of catastrophic failure is… e-mail? Spam filters are wrong all the time, and they’re still fantastic. Glancing in the folder for rare exceptions is cognitively easier than categorizing every single thing one-by-one.

    If there’s one false negative, you don’t go “Holy shit, it’s the actual prince of Nigeria!”

    But sure, let’s apply flawed models somewhere safe, like analyzing medical data. What?

    “And it doesn’t matter if it gets it wrong one time in a hundred, that one time is enough to completely negate all potential positives of the feature.”

    Obviously fucking not.

    Even in car safety, a literal life-and-death context, a camera that beeps when you’re about to screw up can catch plenty of the times you’d have guessed wrong. Yeah - if you straight-up do not look, and blindly trust the beepy camera, bad things will happen. That’s why you have the camera and still look.

    If a single fuckup renders the whole thing worthless, I have terrible news about human programmers.


  • If you want something more complex than an alarm clock, this does kinda work for anything. Emphasis on “kinda.”

    Neural networks are universal approximators. People get hung up on the “approximation” part, like that cancels out the potential in… “universal.” You can make a model that does any damn thing. Only recently has that sentence seriously meant both the “you” and the “can” - backpropagation works, and it works on video-game hardware.
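
    To make “universal approximator” concrete - here’s a toy sketch, assuming PyTorch (nothing from this thread), of plain backprop teaching a tiny network to fake sin(x). Same recipe at every scale:

    ```python
    # Fit sin(x) with a deliberately small MLP, using nothing but backprop.
    # Runs fine on a CPU; a gaming GPU just makes it faster.
    import torch

    x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)  # inputs, shape (256, 1)
    y = torch.sin(x)                                   # the function to imitate

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32),
        torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for _ in range(2000):
        loss = torch.nn.functional.mse_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final MSE: {loss.item():.6f}")  # small - but never exactly zero
    ```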

    “what is currently branded as AI”

    “AI is whatever hasn’t been done yet” has been the punchline for decades. For any advancement in the field, people only notice once you tell them it’s related to AI, and then they just call it “AI,” and later complain that it’s not like on Star Trek.

    And yet it moves. Each advancement makes new things possible, and old things better. Being right most of the time is good, actually. 100% would be better than 99%, but the 100% version does not exist, so 99% is better than never.

    Telling the grifters where to shove it should not condemn the cool shit they’re lying about.


  • I’ve done no such thing.

    I called it half-decent, spooky, and admirable.

    That turns out to be good enough, for a bunch of applications. Even the parts that are just a chatbot fooling people are useful. And massively better than the era you’re comparing this to.

    We have to deal with this honestly. Neural networks have officially caught on, and anything with examples can be approximated. Anything. The hard part is reminding people what “approximated” means. Being wrong sometimes is normal. Humans are wrong about all kinds of stuff. But for some reason, people think computers bring unflinching perfection - and approach life-or-death scenarios with this sloppy magic.

    Personally I’m excited for position tracking with accelerometers. Naively integrating acceleration into velocity and position sends you to outer space almost immediately. Clever filtering almost sorta kinda works. But it’s a complex noisy problem, with a minimal output, where approximate answers get partial credit. So long as it’s tuned for walking around versus riding a missile, it should Just Work.
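
    A minimal sketch of the outer-space problem, assuming numpy and made-up (but plausible) sensor noise:

    ```python
    # Naive dead reckoning: double-integrate accelerometer readings.
    # The phone is sitting perfectly still; only sensor noise is present.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.01                                        # 100 Hz samples
    n = 6_000                                        # one minute of data
    measured = np.zeros(n) + rng.normal(0, 0.05, n)  # ~0.05 m/s^2 noise

    velocity = np.cumsum(measured) * dt              # first integration
    position = np.cumsum(velocity) * dt              # second integration

    print(f"apparent drift after 60 s: {position[-1]:+.2f} m")
    # Meter-scale drift while standing perfectly still - noise alone does
    # that, which is why it's a filtering problem, not an integration problem.
    ```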

    Similarly restrained use-cases will do minor witchcraft on a pittance of electricity. It’s not like matrix math is hard, for computers. LLMs just try to do as much of it as possible.


  • “The complaints then were the same as complaints now”

    Despite results improving at an insane rate, very recently. And you think this is proof of a problem with… the results? Not the complaints?

    People went “I made this!” with fucking Terragen. A program that renders wild alien landscapes which became generic after about the fifth one you saw. The problem there is not expertise. It’s immense quantity for zero effort. None of that proves CGI in general is worthless non-art. It’s just shifting what the computer will do for free.

    At some point, we will take it for granted that text-to-speech can do an admirable job reading out whatever. It’ll be a button you push when you’re busy sometimes. The dipshits mass-uploading that for popular articles, over stock footage, will be as relevant as people posting seven thousand alien sunsets.


  • We don’t need leaps and bounds, from here. We’re already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.

    And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.
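
    Here’s a purely hypothetical sketch of that narrower goal, in PyTorch - keep whatever encoder you like, and give the model a vocabulary of exactly three words. (Every name below is made up for illustration.)

    ```python
    # A "yes / no / mu" model: all the effort goes into the answer,
    # none into the quality of the faking.
    import torch

    ANSWERS = ["yes", "no", "mu"]

    class YesNoMu(torch.nn.Module):
        def __init__(self, encoder: torch.nn.Module, hidden_dim: int):
            super().__init__()
            self.encoder = encoder                      # any text encoder at all
            self.head = torch.nn.Linear(hidden_dim, 3)  # the entire vocabulary

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # encoder output assumed to be (batch, hidden_dim)
            return self.head(self.encoder(tokens))      # (batch, 3) logits

        def answer(self, tokens: torch.Tensor) -> str:  # single example only
            return ANSWERS[self.forward(tokens).argmax(dim=-1).item()]
    ```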

    Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
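
    Both tricks, spelled out around a hypothetical complete(prompt) helper - a stand-in for whatever model you have handy, not any particular library’s API:

    ```python
    def ask_with_tricks(question: str, complete) -> str:
        # Trick 1: tell the model to think out loud (chain of thought).
        draft = complete(
            f"{question}\nThink through this step by step, then state your answer."
        )
        # Trick 2: have it check its own work before you trust it.
        checked = complete(
            f"Question: {question}\nProposed answer:\n{draft}\n"
            "Check the reasoning for mistakes. If it is wrong, give the corrected answer."
        )
        return checked
    ```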




  • “Self-learned programming, started building stuff on my own, and then went through an actual computer science program.”

    Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I’d never heard of modulo. Or functions.
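
    For the record, the reinvented wheel - Python standing in for the QBASIC:

    ```python
    # The hand-rolled remainder, rebuilt from division and multiplication...
    def remainder(a: int, b: int) -> int:
        return a - (a // b) * b

    # ...versus the operator that was there the whole time.
    print(remainder(17, 5))  # 2
    print(17 % 5)            # 2
    ```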

    Right now, this lets people skip the hair-pulling syntax errors, and tell the computer what they think the program should be doing, in plain English. It’s not even “compilable pseudocode.” It’s high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for when you tell it to “implement MP4 encoding” and expect that to Just Work.

    But it’s teaching people to write the comments first.

    “we’re nowhere close to that right now.”

    The distance from here to “oh shit” is shorter than we’d prefer. This tech works like a joke. “Chain of thought” apparently means telling the robot to act smarter… and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn’t look like Hatsune Miku. If it’s stupid, but it works… it’s still stupid. But it works.

    Someone’s gonna prompt “Write like Donald Knuth” and the robot’s gonna go, “Oh, you wanted good code? Why didn’t you say so.”


  • An otherwise meh article concluded with “It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.”

    Much as we want to point and laugh - this is not some loon’s fantasy. This is happening. Some dingus told spicy autocomplete ‘make me a database!’ and it did. It’s surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan’s law.

    This tech is borderline miraculous, even if it’s primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.