If you have to verify children’s identity, you have to verify everyone’s identity. This is part of KOSA. https://www.eff.org/deeplinks/2024/12/kids-online-safety-act-continues-threaten-our-rights-online-year-review-2024
The oldest machine I’ve got is limited to 16GB (excluding rPis). My main desktop is limited to 32GB, which is annoying because I sometimes need more. But I have a home server with 128GB of RAM that I can use when it’s not doing other stuff. I once needed more than 128GB (to run optimizations on a large ONNX model, IIRC), so I had to spin up an EC2 instance with 512GB of RAM.
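For context, the memory-hungry job was offline graph optimization of a big ONNX model. A rough sketch of that kind of run with onnxruntime (file names are placeholders, and I don’t remember the exact options I used):

```python
# Rough sketch of offline ONNX graph optimization with onnxruntime.
# File names are placeholders. The optimizer can need far more RAM
# than the model file size while it rewrites the graph.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
opts.optimized_model_filepath = "model.optimized.onnx"  # save the optimized graph

# Creating the session runs the optimization passes and writes the result to disk.
ort.InferenceSession("model.onnx", sess_options=opts, providers=["CPUExecutionProvider"])
```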
I just use Joplin, encrypted and synced through Dropbox. I tried Logseq, but never really figured out how to use its features effectively. The notebook/note model of Joplin seems more natural to me. My coding/scripting stuff mostly just goes into git repos.
The PC I’m using as a little NAS usually draws around 75 watts. My Jellyfin and general home server draws about 50 watts while idle but can jump up to 150 watts. Most of the components are very old. I know I could get the power usage down significantly with newer components, but I’m not sure the electricity savings would outweigh the cost of sending the old parts to the landfill and creating demand for more new components to be manufactured.
CK2 - 400h
Fallout NV (guessing most of this has been TTW) - 190h
Stellaris - 180h
XCOM 2 - 140h
GTA 5 - 99h
Cities Skylines - 95h
Skyrim - 90h
Civ5 - 85h
XCOM - 83h
The other games I’ve played are pretty much at their standard play-through times (< 70h).
I’m loading up on vacuum tubes.
Last time I looked it up and did the math, these large models are trained on something like only 7x as many tokens as they have parameters. If you think of it like compression, a 1:7 ratio for lossless text compression is perfectly achievable.
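If you want to sanity-check what an ordinary lossless compressor manages on plain text, here’s a quick sketch (the file name is just a placeholder for whatever corpus you have lying around):

```python
# Measure a lossless compression ratio on a plain-text file.
# "corpus.txt" is a placeholder; any large text file works.
import lzma

with open("corpus.txt", "rb") as f:
    raw = f.read()

packed = lzma.compress(raw, preset=9)
print(f"original:   {len(raw):,} bytes")
print(f"compressed: {len(packed):,} bytes")
print(f"ratio:      1:{len(raw) / len(packed):.1f}")
```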
I think the models can still output a lot of stuff verbatim if you try to get them to; you just hit the guardrails they put in place. It seems to work fine for public domain stuff, e.g. “Give me the first 50 lines of Romeo and Juliet.” (albeit with a TOS warning, lol). “Give me the first few paragraphs of Dune.” seems to hit a guardrail, or maybe the refusal was just trained in through reinforcement learning.
A preprint paper released recently detailed how to get around the RL guardrails by controlling the first few tokens of a model’s output, showing the “unsafe” data is still in there.
I think TikTok appeased the right by changing their algorithm. Charlie Kirk is apparently doing extremely well on the platform now.
You can also just install LibreELEC (a Kodi-based OS) and install the Jellyfin Kodi addon. I haven’t tried that addon myself. I used to use Kodi when I had only one TV and liked it. Now that I have 2 Android TVs, just installing Jellyfin on the TVs works fine. I might go back to rPis and disconnect my TVs from the internet, though.
I use GPT-4o (premium) a lot, and yes, I still sometimes experience source hallucinations. It will also sometimes hallucinate things that aren’t in the source at all. I get better results when I tell it not to browse; the large context from processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes and have it set to promote results from recipe sites I like.
Hmm. I just assumed the 14B was distilled from the 72B, because that’s what I thought Llama was doing, and that would just make sense. On further research, it’s not clear whether Llama used the traditional teacher method or just trained the smaller models on synthetic data generated by a larger model. I suppose training smaller models on a larger amount of data generated by bigger models is similar anyway. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.
Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.
Larger models train faster (they need less compute to reach a given loss), for reasons not fully understood. These large models can then be used as teachers to train smaller models more efficiently. I’ve used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it’s not too much worse than these very large models.
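For reference, running it locally looks something like this with llama-cpp-python (the GGUF file name is a placeholder for whatever quantized build you download):

```python
# Minimal sketch: run a ~6-bit (Q6_K) quantized GGUF model locally.
# model_path is a placeholder for the file you download.
from llama_cpp import Llama

llm = Llama(model_path="qwen2.5-14b-instruct-q6_k.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain teacher-student distillation in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```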
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
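That figure roughly checks out, assuming ~14B parameters at ~6 bits each (quantization overhead aside):

```python
# Back-of-the-envelope: 14B parameters at ~6 bits per parameter.
params = 14e9
bits_per_param = 6
print(params * bits_per_param / 8 / 1e9, "GB")  # -> 10.5 GB
```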
My friend just hooks his laptop up to his TV, connects to his VPN, and runs Popcorn Time (which streams torrents). He used to use streaming sites, but those have been getting taken down left and right.
I remember liking Opposing Force and Blue Shift too.
I haven’t checked it out in years. From my understanding, IPFS aims to be a distributed filesystem that works kind of like BitTorrent: if you access a file, you then seed it. Last time I checked it out, the project was jumping on the crypto bandwagon… Just checked out their website now, and I don’t know WTF it is.
Meh, startups and businesses are capitalist organizations, and I think the idea of patents is questionable outside capitalism, so these wouldn’t really be good metrics. I’d guess the richest countries “innovate” the most because they can support riskier endeavors. The U.S. is the capitalist imperial core, so it probably innovates the most. Other capitalist nations, like Haiti, probably not so much.
The best measure of innovation would probably be something like scientific publications. China wins by raw numbers, Vatican City wins per-capita (???).
Thought this was a Republican ad for a second. Similar language and appeal to fear and ignorance.
Yann LeCun would probably be a better source. He does actual research (unlike Altman), and I’ve never seen him over-hype or fear monger (unlike Altman).
This is more complicated than some corporate infrastructures I’ve worked on, lol.
I’m curious if ByteDance could just create a new legal entity and call it TikTak or something.