Lena@gregtech.eu to Programmer Humor@programming.dev · English · 2 days ago
"Source code file" (gregtech.eu) · 217 comments
hperrin@lemmy.ca · 2 days ago
Hey Grok, take this one file out of the context of my 250,000 line project and give me that delicious AI slop!

hperrin@lemmy.ca · 2 days ago
Just really fuck up this shit. I want it unrecognizable!
Zetta@mander.xyz · edited 23 hours ago
Perfect, Grok's context limit is 256,000 tokens, and as we all know, LLM recall only gets better as the context fills, so you will get perfect slop that works amazingly. /s More info on the quality drop as context grows here: https://github.com/NVIDIA/RULER
hperrin@lemmy.ca · 22 hours ago
250,000 lines is way more than 250,000 tokens, so even that context is too small.
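The point above can be sanity-checked with rough arithmetic. Assuming (purely as an illustration, since the real figure depends on the tokenizer and the code style) an average of around 10 tokens per line of code, a 250,000-line project blows far past a 256,000-token context window:

```python
# Back-of-envelope estimate. AVG_TOKENS_PER_LINE is an assumption,
# not a measured value; real code often tokenizes to ~7-15 tokens/line.
AVG_TOKENS_PER_LINE = 10

project_lines = 250_000
context_limit_tokens = 256_000

estimated_tokens = project_lines * AVG_TOKENS_PER_LINE
print(estimated_tokens)                          # → 2500000
print(estimated_tokens / context_limit_tokens)   # roughly 10x over the limit
```

Even with a very conservative 2 tokens per line, the project would still not fit, which is the thrust of the comment above.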