Don’t give them ideas
I like the Gemma models because of the phrasing they use and because they sometimes cite their sources. The best results, though, come from Llama 3 I think. OpenHermes and OpenChat also perform well enough for my purposes.
In the beginning I used Microsoft Phi, but that wasn’t very good.
Have a look at self-hosted alternatives like Ollama in combination with Open WebUI. It can be a hassle to set up, or even excruciatingly painful if you’ve never touched a computer before, but it could be worth a try. I use it daily and honestly like it much more than ChatGPT.
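If you want to try it, the quickest route I know of is the Docker one-liner from the Open WebUI README; roughly along these lines (image name and port taken from their docs, adjust to taste):

# start Open WebUI on port 3000, persisting its data in a named volume
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main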
Glad I could help ;)
You can get different results, sometimes better, sometimes worse, and most of the time differently phrased (e.g. the Gemma models by Google like to make bullet lists and sometimes tell me where they got their information from). There are models specifically trained / fine-tuned for different tasks (mostly coding, but also writing stories, answering medical questions, describing what is in a picture, speaking different languages, running on smaller / bigger hardware, etc.). Have a look at Ollama’s library of models, which is outright tiny compared to e.g. Hugging Face.
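To give a feel for how switching works: in Ollama it’s just pulling and running a different tag (the tags below are examples from their library and may have changed since):

ollama pull gemma:7b     # Google’s Gemma, 7B variant
ollama run llama3        # pulls on first use, then opens an interactive chat
ollama run codellama "write a bash one-liner that finds files over 1 GB"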
Also, I don’t trust OpenAI and the others to treat company data or actual code snippets from work that I feed them as confidential.
If you’re lucky, you’ve just set it to the wrong version; mine uses 10.3.0 (see below).
I tried running the Docker container first as well, but gave up since there are separate image versions for CUDA and ROCm, which come bundled in the image and therefore make it unnecessarily big.
I am running it on Fedora natively. I installed it with the setup script from the top of the docs:
curl -fsSL https://ollama.com/install.sh | sh
After that I created a service file (also described in the linked docs) so that it starts at boot (so I can just boot my PC and forget about it without needing to log in).
The crucial part for the GPU in question (RX 6700 XT) was this line under the [Service] section:
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
As you said, this sets the environment variable for ROCm. And to be able to reach it from outside localhost (for my server):
Environment="OLLAMA_HOST=0.0.0.0"
I have my gaming PC running as the Ollama host when I need it (an RX 6700 XT with ROCm doing the heavy lifting). The PC idles at ~50 W and draws up to 200 W when generating an answer. It is plenty fast though.
My mini PC home server runs Open WebUI with access to this “Ollama instance”, but also to OpenAI’s API for when I just need a quick answer and therefore don’t want to turn on my PC.
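In case it helps anyone copying this setup: Open WebUI takes both backends as environment variables (variable names are from the Open WebUI docs; the IP and key below are placeholders):

# point Open WebUI at the gaming PC’s Ollama, and optionally at OpenAI
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  -e OPENAI_API_KEY=sk-placeholder \
  -v open-webui:/app/backend/data --name open-webui \
  ghcr.io/open-webui/open-webui:main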
It took that long before the update for me as well: 1-2 minutes for every loading screen. There was a mod for that (before the next-gen update) that you couldn’t install via Nexus because the setup was a little more complex, but it worked really well (see https://www.nexusmods.com/fallout4/mods/10283 )
The mod’s trick was to raise the FPS to 300-350 only while in the loading screen, because the loading times were somehow tied to the FPS. Well done, Bethesda.
That’s what I was thinking the whole time. I mean, it’s not that far-fetched a guess.
Better no one tell Apple.
I wouldn’t want to write a web server / database connection / scheduler / etc. from scratch. Spring Boot plus Lombok turns 2k lines of code into 100.
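As a rough illustration (hypothetical class, not from any real project): one Lombok annotation stands in for all the getters, setters, equals/hashCode and toString you’d otherwise write by hand.

// requires the lombok dependency; @Data generates getters, setters,
// equals(), hashCode() and toString() for every field at compile time
import lombok.Data;

@Data
public class Customer {
    private Long id;
    private String name;
    private String email;
}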
I hope it went well :) I was fully prepared to change the image tag back to v2, but didn’t need to.
Kind of funny if you read it like that, and while it certainly doesn’t make them immortal, I hope it at least makes them last a while longer.
Vanced died because they tried to generate revenue from it and made themselves vulnerable.
Also, unlike Vanced, ReVanced doesn’t distribute modded YouTube APKs itself.
This is what current implementations like ReVanced do. The endgame will be full-blown DRM. Until then, it will be a cat-and-mouse game.
Where does the …media get sourced from? It looks like Pornhub GIFs. You could think about an integration with some NSFW subreddits as well (if you can get past the API barrier they’ve built up, like redlib does).
I like the idea of it, and there were times I used it correctly, but most of the time I do it wrong, I guess.
Let’s hope that this is not a production system. I want to say that you’d have to try hard to do something that stupid, but then again, knowing myself, you can cause a lot of trouble with a single command in a CLI somewhere.
I’m so looking forward to this. When I tried to use tmpfs / a ramdisk, transcoding would simply stop because there was no space left.
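For context, a tmpfs transcode directory is just a RAM-backed mount with a hard size cap; something like this (path and size are made up):

# 8 GiB RAM-backed scratch space; writes fail once it fills up,
# which is exactly what killed my transcodes
sudo mkdir -p /mnt/transcodes
sudo mount -t tmpfs -o size=8G tmpfs /mnt/transcodes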
Where hot potato license?