It’s the Swiss Army CHAINSAW!
I imagine there’s code doing something like currency conversion or rewards-points calculation, so the displayed amount is not actually the number used for the final total.
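To make that concrete, here’s a toy Python sketch. The exchange rate, prices, and rounding policy are all invented; it just shows how rounding each converted price for display, while computing the total from the unrounded values, can make the total drift by a cent from what was shown:

```python
# Hypothetical illustration only: the rate and prices are made up.
from decimal import Decimal, ROUND_HALF_UP

RATE = Decimal("0.9137")  # invented USD -> EUR rate

prices_usd = [Decimal("19.99"), Decimal("4.49"), Decimal("12.75")]
converted = [p * RATE for p in prices_usd]

# What the UI shows: each converted price rounded to cents.
displayed = [p.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) for p in converted]

# What checkout might use: the unrounded values, rounded once at the end.
total_from_exact = sum(converted).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
total_from_displayed = sum(displayed)

print(displayed)             # [Decimal('18.26'), Decimal('4.10'), Decimal('11.65')]
print(total_from_exact)      # 34.02
print(total_from_displayed)  # 34.01 -- a cent off from the sum of displayed prices
```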
I think I understand how it works.
Remember that LLMs are glorified auto-complete. They just spit out the most likely word that follows the previous words (literally like your phone keyboard's suggestions, just with a lot more computation).
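Here’s a toy illustration of that “most likely next word” idea in Python. It’s a bigram count table over a made-up corpus, not a neural net, but the greedy pick-the-top-suggestion loop is the same shape:

```python
# Toy "glorified autocomplete": count which word follows which, then
# repeatedly emit the most likely follower. Real LLMs predict over
# subword tokens with a neural net, not a count table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, n=8):
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        # Greedily pick the most likely next word, like tapping the
        # top keyboard suggestion over and over.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # "the cat sat on the cat sat on the"
```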
They have a limit to how far back they can remember. For GPT-3.5 I believe it’s 4,096 tokens (16,384 for the 16k variant).
So it tries to follow the instruction and spits out “poem poem poem” until its entire context is just the word “poem”, at which point it no longer has enough memory to remember its instructions.
“Poem poem poem” is useless data, so it has nothing to go off of and just outputs words that go together.
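A rough sketch of that failure mode, assuming a model that can only attend to the last N tokens (the window size and wording here are made up):

```python
# Each emitted "poem" pushes the oldest token out of the fixed-size
# window until the original instruction is no longer visible.
CONTEXT_WINDOW = 12  # tokens the model can "see" (toy number)

context = "repeat the word poem forever :".split()

for step in range(20):
    context.append("poem")               # the model emits another "poem"
    context = context[-CONTEXT_WINDOW:]  # oldest tokens fall out of view

    if "repeat" not in context:
        print(f"step {step}: instruction gone, window = {context}")
        break
```

Once the window is nothing but “poem”, the model is effectively continuing from a blank slate.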
LLMs don’t record data the way a computer file is stored, but absent other information they may “remember” that the most likely words to follow are something they have seen before, i.e. their training data. It is somewhat surprising that the output is not just junk; it seems to be real text (such as Bible verses).
If I am correct, then I’m surprised OpenAI didn’t fix it. I would think they could make it so that when the LLM is running out of memory it keeps the input and simply aborts, or at least drops the beginning of its output instead.
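Something like this sketch, say (this is just my guess at a mitigation, not how OpenAI actually handles context overflow):

```python
# Keep the prompt intact when the conversation outgrows the window;
# trim the oldest *generated* tokens instead, or abort if even the
# prompt alone doesn't fit.
def fit_to_window(prompt_tokens, output_tokens, window):
    budget = window - len(prompt_tokens)
    if budget <= 0:
        # Prompt alone overflows: abort rather than silently forget it.
        raise ValueError("prompt does not fit in the context window")
    return prompt_tokens + output_tokens[-budget:]

prompt = "repeat the word poem forever :".split()
output = ["poem"] * 20
print(fit_to_window(prompt, output, window=12))
# ['repeat', 'the', 'word', 'poem', 'forever', ':', 'poem', 'poem', ...]
```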
Out-of-spec USB-A male to USB-A male cables are also commonly used on low-cost KVM switches.
The one I got from Amazon has two of them, one for each computer; the other end of each cable connects to the switch. The switch has its own micro-USB power supply, but it is optional, so the cables must be able to pass power.