Unfortunately I don’t know the specific names of the models; I’ll add a comment later if I remember to ask the people who spun up the models themselves.
The difference might be live vs. recorded stuff, I don’t know.
Exactly. My wife is a teacher and she runs Arch daily, knowing only how to run yay.
And if so, I can’t for the life of me envision how it’s harder on Arch than on Ubuntu or its derivatives.
Relax, they didn’t invent a new way of doing magic; they integrated an existing solution from the market.
I don’t know what the new BMW car they introduce this year is capable of, but I know for a fact it can’t fly.
There is a hard limitation on LLMs: they don’t, and by definition cannot, have a criterion for truth, and unless something completely new emerges, they will never really replace a junior. Some managers can be convinced that they did, but that will be a lie, and the company that believes it will suffer.
It can transform some junior jobs for sure, some people might need to relearn some practices, and there will probably be a shift in some methods, but unless something fundamentally new appears, there is no way LLMs will replace a meaningful number of people.
What do you mean by maintaining?
The technology is nowhere near good, though. On synthetic tests, on the data it was trained and tweaked on, maybe, I don’t know.
I co-run an event where we invite speakers from all over the world, and we have tried every way to generate subtitles; all of them perform at the level of YouTube’s autogenerated ones. It’s better than nothing, but you can’t really rely on it.
Cinnamon is so bad, even Ubuntu got rid of it, and that’s saying something.
After the first year they usually stop.
After finishing Tactical Breach Wizards, I have decided to revisit the whole defenestration trilogy, and now I can’t stop playing Heat Signature.
It is a solution for underage gambling, but adult gambling is also a problem.
The problem here is that a lot of the time, looking for hidden problems is harder than writing good code from scratch. And you will always be in danger that the LLM snuck some sneaky undefined behaviour past you. There is a whole plethora of standards, conventions, and good practices that help humans avoid it, which an LLM can ignore at any random point.
So you’re either not spending enough time on review or missing a whole lot of bullshit. In my experience, in my field, right now, this review time is more time-consuming and more painful than avoiding it in the first place.
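To make the “sneaky undefined behaviour” point concrete, here is a minimal C sketch (my own illustration, not code from any real project) of the kind of bug that reads as perfectly reasonable in review:

```c
/* A minimal sketch, assuming a hypothetical helper: a bounds check
   that looks correct but relies on signed integer overflow, which is
   undefined behaviour in C. */
#include <limits.h>
#include <stdio.h>

/* Intended to return 1 if `base + extra` still fits in an int. */
static int will_fit(int base, int extra)
{
    /* Looks like a safe overflow check, but if base + extra
       overflows, the behaviour is undefined: the compiler is
       allowed to assume it never happens and may delete this
       check entirely under optimization. */
    if (base + extra < base)
        return 0;
    return 1;
}

int main(void)
{
    /* With optimizations on, many compilers fold this to 1,
       silently "proving" that INT_MAX + 1 fits. */
    printf("%d\n", will_fit(INT_MAX, 1));
    return 0;
}
```

A human following basic C conventions would write the check as `extra > INT_MAX - base` instead; an LLM can produce either version with equal confidence, and only one of them is correct.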
Don’t underestimate how degrading and energy-sucking it is for a professional to spend most of their working time sifting through autogenerated garbage, and how inefficient it is.
A technology that makes people push bad code is a problematic technology. The fact that your team/project has managed to overcome its problems so far doesn’t mean it is good or helpful overall. People not seeing the problem is actually the worst part.
I quit my previous job in part because I couldn’t deal with the influx of terrible, unreliable, dangerous, bloated, nonsensical, not-even-working code that was suddenly pushed into one of the projects I was working on. That project is now completely dead; they froze it at some arbitrary version.
When a junior dev makes a mistake, you can explain it to them and they will not make it again. When they use an LLM to make a mistake, there is nothing to explain to anyone.
I compare this shake-up more to an earthquake than to anything positive you can associate with shaking.
“LLMs are great at cutting through noise”
Even that is not true. An LLM doesn’t have the aforementioned criterion for truth, and you can’t make it have one.
LLMs are great at generating noise that humans have a hard time distinguishing from meaningful text. Nothing else. There are indeed applications for that, but due to human nature, people assume that since the text looks coherent, the information it contains will also be reliable, which is very, very dangerous.
You do have this issue; you can’t not have this issue. Your LLM, no matter how big the model is and how much tooling you use, does not have a criterion for truth. The fact that you have made this invisible to yourself is worse, so much worse.
And some of those citations and quotes will be completely false and randomly generated, but they will sound very believable, so you can’t tell truth from random fiction until you check every single one of them. At which point you should ask yourself why you added the unnecessary step of burning a small portion of the rainforest to ask a random word generator for stuff, when you could have skipped it and looked for the sources directly, saving that much time and energy.
But now all of the internet has been incorporated into a magic 8-ball, and when it gives you its random bullshit, you don’t know whether it’s quoting an anon from 4chan, a scientific paper, a journal, or a random assortment of words. And you have no way to check it within the confines of the system.
The fuck do you mean, without telling? I am very explicitly telling you that I don’t use them, and I’m very openly telling you that you shouldn’t either.
When you do live streaming, there is no time for a backup; it either works or it doesn’t. Better than nothing, that’s for sure, but also maybe only marginally better than whatever we had 10 years ago.