maybe it’s because i grew up with vhs first but dvd always felt like a lot of hassle compared to just “put it in and watch”
i mean, json is valid yaml
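for instance, a quick way to see it in python (assuming pyyaml is installed; the document here is just a made-up example):

```python
# parse the same document with a JSON parser and a YAML parser and compare
# requires PyYAML: pip install pyyaml
import json
import yaml

doc = '{"name": "example", "tags": ["a", "b"], "count": 3}'
assert json.loads(doc) == yaml.safe_load(doc)  # both yield the same dict
print(yaml.safe_load(doc))  # {'name': 'example', 'tags': ['a', 'b'], 'count': 3}
```

yaml 1.2 was explicitly specified as a superset of json, so any json document should parse the same way with a yaml parser (older yaml 1.1 parsers have a couple of edge cases, but nothing you’d normally hit).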
i don’t know if i would take this study as “knowledge”. that map of the us? it’s just a map of chargers, not of data from the study. reading the study, it looks like they only measured in one county. there’s no categorisation of the type of fast charger they measured, just “a variety”. the error bars overlap enough that this could all be measurement error. and why only measure at fast chargers and gas stations? why not at other high-power electrical systems like transformer yards in urban areas? they also have fans, surely.
question is, why publish it if it is so obviously (and willfully) wrong?
oh absolutely, it’s fascinating to hear a perspective i didn’t know existed.
i ripped all my dvds specifically to get rid of the menus because they were slow, hard to use, and full of frustrating animations. they usually just felt like an afterthought.
i’ve never been one to be swayed by extras, it usually just feels akin to jingling keys to get me to buy shit. maybe i’m weird.
it’s called “distrohopping”, and yes. nowadays it’s easier to do it in a vm, but less fun
thankfully there is a -bin package
i’ve never heard of anyone that keeps dvd menus around. like, i get it for archival purposes but i would never want to actually navigate a menu when i want to watch something. in my mind it’s like sitting through the commercials on a rented vhs. i would probably store a converted copy as well, in a format that would let me specify from the application what track and subtitle i want so i can set a default.
not necessarily. image generation models work at a more fine-grained level than that. they can seamlessly combine related concepts, like “photograph”+“person”+“small”+“pose”, and generate plausible material because all of those concepts have features in common.
you can also use small add-on models trained on very little data (tens to hundreds of images, as compared to millions to billions for a full model) to “steer” the output of a model towards a particular style.
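as a rough sketch of what that looks like in practice with huggingface’s diffusers library (the model id, the lora filename, and the prompt are just illustrative placeholders, not a specific recipe):

```python
# load a base text-to-image model, then apply a small style adapter (a LoRA) on top
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative base model id
    torch_dtype=torch.float16,
).to("cuda")

# the adapter file is tiny and trained on a handful of images,
# while the base model it steers was trained on billions
pipe.load_lora_weights("./loras", weight_name="my_style_lora.safetensors")  # placeholder path

image = pipe("a photograph of a small dog, watercolour style").images[0]
image.save("out.png")
```

the point being that the base model does the heavy lifting; the add-on just nudges the output towards whatever its handful of training images looked like.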
you can make even a fully legal model output illegal data.
all that being said, the base dataset that most of the stable diffusion family of models started out with in 2021 is medical in nature, so there could very well be bad shit in there. it’s like 12 billion images, so it’s hard to check, and even back with stable diffusion 1.0 there was less than a single bit of data in the final model per training image.
oh this was a while ago, i currently don’t have a homelab. i gave up waiting for mods to update and then it slipped my mind.
do you use a premade compose file or did you write your own? i started out writing my own but it quickly got very complicated…
where’s the rest?
dlss proper isn’t frame gen, it’s upscaling (deep learning super sampling). frame generation is a separate feature nvidia ships under the dlss 3 branding.
all AR glasses need cameras. that’s how they figure out where in the R to put the A.
idk, the conversion therapy ban got half a million in a week. consumer rights is less interesting but 100k a week seems doable
without spoiling the details, it’s a bit like groundhog day or majora’s mask.
i always encourage people to take it slow and drink in the world with ow, and that applies because of the “limit”, which isn’t really a limit: you can play as long as you’d like. think of it more as a pomodoro timer. it’s also very well signposted.
same! turns out you can make it a lot easier for yourself by observation. for example, there are only two of them you actually need to manoeuvre around. also, that entire section takes three to five minutes, but you have like twelve, so it’s fine to take it slow. finally, you can mark your destination from the log to get its location.
maybe it’s reflective of the personality of the player. i can never get to bed at a reasonable hour, and i’ve heard a theory that some people have that problem because their mind thinks that the sooner the next day begins, the less time they have to themselves.
how many hours a day do you use a browser?