• 0 Posts
  • 171 Comments
Joined 9 months ago
Cake day: April 24th, 2024

  • Run them in a sandboxed VM?

    VM escaping is not impossible, but it’s probably outside of the ability of most cracked games with malware.

    Even better: go with a bare metal Linux install, and then use a sandboxed VM.

    Even less malware is going to be able to escape the VM and then also have any idea of what to do in a Linux environment, purely because the vast, vast majority of malware (not exploits per se) is designed to fuck up Windows.

    Is this perfectly safe?

    No, but nothing is.

    Any legitimately purchased game with closed-source, kernel-level anti-cheat could be doing literally anything to your PC, and you wouldn’t know.
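    For anyone who wants to actually try this, here’s a rough sketch of what I mean: a throwaway, network-less Windows VM launched with QEMU/KVM from a Linux host. The disk image name, RAM, and core count are placeholders, so adjust for your own setup.

    ```python
    import subprocess

    # Rough sketch: a disposable, network-isolated Windows VM under QEMU/KVM on a
    # Linux host. Disk image name, RAM, and core count are placeholders.
    qemu_cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                # hardware virtualization instead of pure emulation
        "-m", "8G",                   # guest RAM
        "-smp", "4",                  # guest CPU cores
        "-drive", "file=win10-sandbox.qcow2,format=qcow2",  # pre-installed guest image (placeholder)
        "-snapshot",                  # write changes to a temp overlay, discarded on shutdown
        "-nic", "none",               # no network device at all, so nothing to phone home with
        "-display", "gtk",
    ]

    subprocess.run(qemu_cmd, check=True)
    ```

    The important parts are just ‘no network device’ and ‘throw the disk writes away afterward’; any VM manager that lets you configure those gets you the same thing.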



  • sp3ctr4l@lemmy.zip to Programmer Humor@lemmy.ml · Learn to code

    Yeah, I learned to code almost 20 years ago in order to mod video games, and quickly found that many bugs and massive problems in mods and games are caused by coders being either extremely lazy or making extremely dumb decisions.

    In general, a ginormous problem with basically all software is technical debt and spaghetti code, which make inefficiency and unnecessary, poorly documented complexity grow at roughly the same rate as hardware advances in compute power.

    Basically nobody ever refactors anything; it’s just bandaids upon bandaids upon bandaids, because a refactor only pays off on a 1-or-2-year-plus timeframe, but basically all corporations only exist on a next-quarter timeframe.

    This Jack Forge guy is only just starting to downslope from the peak of the Dunning-Kruger graph of competence vs confidence.


  • Whoops, Mutahar accidentally and indirectly killed it by giving it coverage.

    He covered it about two weeks ago, somewhat trepidatiously, thinking the project was awesome but stating that Rockstar could kill the whole thing if they became aware of it…

    https://youtube.com/watch?v=jTrY6P1H53E

    And now it’s dead.

    “Due to the unexpected attention that our project received and after speaking with Rockstar Games, we have decided to take down the Liberty City Preservation Project.”

    The above-linked video, with almost half a million views, is almost certainly the ‘unexpected attention our project received’, not the inline-linked tweet IGN provided, which has fewer than 300 retweets.


  • Technically not… exactly a roguelike, but:

    Deus Ex w/ Randomizer Mod and PermaDeath.

    You can set it up fairly easily with the Steam version of DX and the Revision mod, which at this point bundles basically all the most popular DX mods, reconfigured to play nice with each other and be as mutually compatible as possible.

    Someone already mentioned Caves of Qud; that one is amazing. Noita is really good, and StarSector is functionally a roguelike, but in space.

    Also, No Man’s Sky is basically a roguelike if you turn on permadeath and kick the difficulty up.



  • sp3ctr4l@lemmy.zip to PC Gaming@lemmy.ca · RTX 50 series opinons?

    I hate it I hate it I hate it.

    This AI hallucinated frame crap is bullshit.

    Their own demos show things like: the game is running at 30ish fps, but we are hallucinating that up to 240!

    Ok…great.

    I will give you that that is wonderful for games that do not really depend on split-second timing / hit detection, and/or just have a pause function as part of normal gameplay.

    Strategy games, 4x, city/colony builders, old school turn based RPGs… slow paced third person or first person games…

    Sure, its a genuine benefit in these kinds of games.

    But anything that does involve split second timing?

    Shooters? ARPGs? Fighting games?

    Are these just… all going to be designed around the idea that actually your input just has a delay?

    That you’ll now be unable to figure out whether you missed a shot, or got shot by a guy behind a wall, because of network lag or because your own client’s rendering just lied to you?

    I am all onboard with intelligent upscaling of frames.

    If you can render natively at 5 or 10 or 15% of the resolution of the actual frame you see, then upscale those frames and end up with an actually higher true FPS?

    Awesome.

    But not predictive frame gen.
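    To put some rough numbers on why (my own back-of-the-napkin math, using the 30-to-240 demo figures above, not anything measured):

    ```python
    # Generated frames raise the FPS you see, but your inputs still only land on
    # the real, fully rendered frames. Figures are illustrative, not measured.
    base_fps = 30          # what the game actually renders, per the demo
    displayed_fps = 240    # what the frame-gen output claims

    real_frametime_ms = 1000 / base_fps        # ~33.3 ms between frames that reflect your input
    shown_frametime_ms = 1000 / displayed_fps  # ~4.2 ms between frames you merely see

    print(f"Frames shown every {shown_frametime_ms:.1f} ms,")
    print(f"but input only lands every {real_frametime_ms:.1f} ms.")
    # A native 240 FPS pipeline would put both numbers at ~4.2 ms.
    ```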


  • They marketed the headset as being able to replace the functions of basically everything an average person uses a laptop/PC, cellphone, and TV for.

    People routinely use computers and TVs for many hours at a time.

    People routinely spend hours on their phone and basically always have them in their pocket or nearby.

    They showed people wearing the things in planes, to watch 2-3 hour movies.

    Sitting down in their (strangely TV-less) living rooms to watch 2-3 hour movies.

    Doing … some kind of work you’d do on a laptop, but easily being able to keep the things on, kick a ball around with your kid, and then seamlessly go back to working.

    Wearing the headset as you are unpacking at a hotel, and then taking a video phone call with them.

    Not the thing ringing, you putting the headset on, and then taking a call.

    No, you’re just already wearing the headset, having just arrived at the hotel, implying you had it on, like a hat, as you took your luggage up to your room.

    https://youtube.com/watch?v=IY4x85zqoJM

    Taken as a montage, you certainly get the impression that you’re encouraged to just wear the thing all the time, anywhere; that it’s an ‘all-device’ that replaces a whole bunch of other devices and is easily used/worn in many settings for long periods of time.




  • Here’s the quote, for people allergic to reading the update in the article.

    Update: Nvidia sent us a statement: “We are aware of a reported performance issue related to Game Filters and are actively looking into it. You can turn off Game Filters from the NVIDIA App Settings > Features > Overlay > Game Filters and Photo Mode, and then relaunch your game.”

    We have tested this and confirmed that disabling the Game Filters and Photo Mode does indeed work. The problem appears to stem from the filters causing a performance loss, even when they’re not being actively used. (With GeForce Experience, if you didn’t have any game filters enabled, it didn’t affect performance.) So, if you’re only after the video capture features or game optimizations offered by the Nvidia App, you can get ‘normal’ performance by disabling the filters and photo modes.

    So, TomsHW (is at least claiming that they) did indeed test this, and found that it’s the filters and photo mode causing the performance hit.

    Still a pretty stupid problem to have, considering the old filters did not cause this problem, but at least there’s a workaround.

    … I’m curious whether this new settings app even exists on, or has been tested on, Linux.


  • … I mean, in an academic sense, if you possess the ability to implement the method, sure, you can write your own code and do this yourself on whatever hardware you want, train your own models, etc.

    But from the practical standpoint of an average computer hardware user? No, I don’t think you can just use this method on any hardware you want with ease; you’ll be reliant on official drivers, which just are not released for / do not support a whole ton of hardware.

    Not many average users are going to have the time or skillset required to write their own implementations, then train and tweak the AI models for every different game at every different resolution for whichever GPUs / NPUs etc, the way massive corporations do.

    It’ll be a ready-to-go feature of various GPUs and NPUs and SoCs and whatever, designed and manufactured by Intel and reliant on drivers released by Intel, unless a giant Proton-style open source project happens, with tens or hundreds or thousands of people dedicating themselves to making this method work on whatever hardware.

    I think at one point someone tried to do something like this, figuring out how to hackily implement DLSS on AMD GPUs, but it seems to require compiling your own DLLs, is based on one random person’s implementation of DLSS, and is likely quite buggy and inefficient compared to an actual Nvidia GPU with official drivers.

    https://github.com/PotatoOfDoom/DLSS/tree/981fff8e86274ab1519ecb4c01d0540566f8a70e

    https://github.com/PotatoOfDoom/CyberFSR2

    https://docs.google.com/spreadsheets/d/1XyIoSqo6JQxrpdS9l5l_nZUPvFo1kUaU_Uc2DzsFlQw/htmlview

    Yeah, looks like a whole bunch of compatibility issues and complex operations for a ‘i just want play game’ end user to figure out.

    Also, hey! Your last bit there about the second patent I listed seems to describe how they’re going to do the real-time moderation between which frames are fully pipeline rendered and which ones are extrapolated: use the described GPU kernel operation to estimate pipeline frame render times against a target FPS/refresh rate, and do extrapolation whenever the FPS won’t hit that target.
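    If I’m reading that right, the decision logic would be something roughly like this. This is pure guesswork on my part, just me paraphrasing the patent language, and every function name here is made up for illustration:

    ```python
    # Guess at the scheduling loop the second patent seems to describe: estimate
    # how long a full pipeline render of the next frame would take, and only fall
    # back to extrapolation when that estimate would blow the frame budget.
    def next_frame(history, target_fps, estimate_render_ms, render_frame, extrapolate_frame):
        budget_ms = 1000.0 / target_fps      # e.g. ~16.7 ms for a 60 Hz target
        predicted_ms = estimate_render_ms()  # the GPU-kernel cost estimate from the patent

        if predicted_ms <= budget_ms:
            frame = render_frame()           # full pipeline render; becomes a new reference frame
            history.append(frame)
        else:
            frame = extrapolate_frame(history)  # cheap extrapolation from recent real frames
        return frame
    ```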

    … Which would mean that the practical upshot for an average end user is that if they’re not using a GPU architecture designed with this method in mind, the method isn’t going to work very well, which means this is not some kind of magic ‘holy grail’, universal software upgrade for all old hardware (I know you haven’t said this, but others in this thread have speculated at this)…

    And that means the average end user is still in a state of comparing cost vs performance/features of an increasingly architecture divergent selection of future GPUs/NPUs/SoCs/APUs.

    Also, the overhead of predicting pipeline render times vs extrapolated frame render times is not figured into this paper, meaning the article based on the paper is, at least to some extent, overstating this method’s practical speed to the general public.

    I think the disconnect we are having here is that I am coming at this from a ‘how does this actually impact your average gamer’ standpoint, and you are coming at it from much more academic standpoint, inclusive of all the things that are technically correct and possible, whereas I am focusing on how that universe of technically possible things is likely to condense into a practical reality for the vast majority of non experts.

    Maybe ‘proprietary’ was not exactly the technically correct term to use.

    What is a single word that means ‘this method is a feature that is likely to only be officially, out of the box supported and used by specific Intel GPUs/NPUs etc until Nvidia and/or AMD decide to officially support it out of the box as well, and/or a comprehensive open source team dedicates themselves to maintaining easy to install drivers that add the same functionality to non officially supported hardware’?

    Either way, I do enjoy this discussion, and acknowledge that you seem to be more knowledgeable in the technicalities than myself.


  • The point of this method is that it takes less computations than going through the whole rendering pipeline, so it will always be able to render a frame faster than performing all the calculations unless we’re at extremes cases like very low resolution, very high fps, very slow GPU.

    I feel this is a bit of an overstatement, otherwise you’d only render the first frame of a game level and then just use this method to extrapolate every single subsequent frame.

    Realistically, the model has to return to actual fully pipeline rendered frames from time to time to re-reference itself; otherwise you’d quickly end up with a lot of hallucination/artefacts, kind of an AI version of a shitty video codec that morphs into nonsense when it’s only generating partial new frames based on detected change from the previous frame.

    It’s not clear at all to me, from the paper alone, how often, or under what conditions, reference frames are referred back to… After watching the video as well, it seems they are running 24-second, 30 FPS scenes and functionally doubling them to 60 FPS by referring to some number of history frames to extrapolate half of the frames in the completed videos.

    So, that would be a 1:1 ratio of extrapolated frames to reference frames.

    This doesn’t appear to actually be working in a kind of real time, moderated tandem between real time pipeline rendering and frame extrapolation.

    It seems to just be running already captured videos as input, and then rendering double FPS videos as output.

    …But I could be wrong about that?

    I would love it if I missed this in the paper and you could point out to me where they describe in detail how they balance the ratio of, or the conditions under which, a reference frame is actually referred to… All I’m seeing is basically ‘we look at the history buffer.’
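    To be concrete about what I think they are actually doing, the demo pipeline reads to me like a simple offline doubling pass over an already-captured clip. Again, this is my own reading, not their code; ‘extrapolate’ stands in for their model, and the history length is a guess:

    ```python
    # Take an already-captured 30 FPS clip, keep every real frame, and insert one
    # extrapolated frame after each of them, giving a 60 FPS output and an exact
    # 1:1 generated-to-reference ratio.
    def double_fps(captured_frames, extrapolate, history_len=2):
        output = []
        for i, frame in enumerate(captured_frames):
            output.append(frame)                          # real, fully rendered reference frame
            history = captured_frames[max(0, i - history_len + 1): i + 1]
            output.append(extrapolate(history))           # one generated frame per real frame
        return output
    ```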

    Although you did mention these are only rough estimates, it is worth saying that these numbers are only relevant to this specific test and this specific GPU (RTX 4070 TI).

    That’s a good point, I missed that, and it’s worth mentioning they ran this on a 4070 Ti.

    I doubt you will ever run into a situation where you can go through the whole rendering pipeline before this model finishes running, except for the cases I listed above.

    Unfortunately, they don’t actually list any baseline for frametimes generated through the normal rendering pipeline. It would have been nice to see that as a sort of ‘control’ column, where all the scores for the various ‘visual difference/error from standard fully rendered frames’ metrics are 0 or 100 or whatever; then we could compare some numbers on how much quality you lose for faster frames, at least on a 4070 Ti.

    If you control for a single given GPU, then sure, other than edge cases, this method will almost always result in greater FPS for a slight degradation in quality…

    …but there’s almost no way this method is not proprietary, and thus your choice will be between price-comparing GPUs with their differing rendering capabilities, not something like ‘do I turn MSAA to 4x or 16x’, available on basically any GPU.

    More on that below.

    This can run on whatever you want that can do math (CPU, NPU, GPU), they simply chose a GPU. Plus it is widely known that CPUs are not as good as GPUs at running models, so it would be useless to run this on a CPU.

    Yes, this is why I said this is GPU tech; I did not figure it needed to be stated that, well, okay, yes, technically you can run it locally on a CPU or NPU or APU, but it’s only going to actually run well on something resembling a GPU.

    I was aiming at the practical upshot for the average computer user, not a comprehensive breakdown for hardware/software developers and extreme enthusiasts.

    Where did you get this information? This is an academic paper in the public domain. You are not only allowed, but encouraged to reproduce and iterate on the method that is described in the paper. Also, the experiment didn’t even use Intel hardware, it was NVIDIA GPU and AMD CPU.

    To be fair, when I wrote it originally, I used ‘apparently’ as a qualifier, indicating lack of 100% certainty.

    But uh, why did I assume this?

    Because most of the names on the paper list the company they are employed by, there is no freely available source code, and, generally speaking, corporate-funded research is made proprietary unless explicitly indicated otherwise.

    Much research done by universities ends up proprietary as well.

    This paper only describes the actual method being used for frame gen in relatively broad strokes; the meat of the paper is devoted to analyzing its comparative utility, not thoroughly discussing and outlining exact opcodes or whatever.

    Sure, you could try to implement this method based on reading this paper, but that’s a far cry from ‘here’s our MIT-licensed alpha driver, go nuts.’

    …And, now that you bring it up:

    Intel filed what seem to me to be two different patent applications directly related to this academic publication, almost 9 months before the paper we are discussing came out, with 2 of the 3 credited inventors on the patents also having their names on this paper.

    This one appears to be focused on the machine learning / frame gen method, the software:

    https://patents.justia.com/patent/20240311950

    And this one appears to be focused on the physical design of a GPU, the hardware made to leverage the software.

    https://patents.justia.com/patent/20240311951

    So yeah, looks to me like Intel is certainly aiming at this being proprietary.

    I suppose it’s technically possible they do not actually get these patents awarded to them, but I find that extremely unlikely.

    EDIT: Also, lol, video game journalism professional standards strike again: whoever wrote the article here could have looked this up and added the highly relevant ‘Intel is pursuing a patent on this technology’ information to their article in maybe a grand total of 15 to 30 extra minutes, but nah, too hard I guess.


  • The paper includes a chart of average frame gen times at various resolutions, across the various test scenarios where they compared against other frame generation methods.

    Here’s their new method’s frame gen times, averaged across all their scenarios.

    540p: 2.34ms

    720p: 3.66ms

    1080p: 6.62ms

    Converted to FPS, assuming constant frametimes, that’s about…

    540p: 427 FPS

    720p: 273 FPS

    1080p: 151 FPS

    Now let’s try extrapolated pixels per frametime to guesstimate an efficiency factor:

    540p: 518400 px / 2.34 ms = 221538 px/ms

    720p: 921600 px / 3.66 ms = 251803 px/ms

    1080p: 2073600 px / 6.62 ms = 313233 px/ms

    Plugging pixels vs efficiency factor into a graphing system and using a power-curve best fit, you get these efficiency factors for the non-listed resolutions:

    1440p: 361423 px/ms

    2160p: 443899 px/ms

    Which works out to roughly the following frame times:

    1440p: 10.20 ms

    2160p: 18.69 ms

    Or in FPS:

    1440p: 98 FPS

    2160p: 53 FPS
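    If anyone wants to check my guesstimate, here’s the whole thing as a quick script. The power-curve fit, and everything past 1080p, is my own extrapolation, not data from the paper:

    ```python
    import numpy as np

    # The paper’s listed averages: resolution -> (pixel count, frame gen time in ms).
    listed = {
        "540p":  (960 * 540,   2.34),
        "720p":  (1280 * 720,  3.66),
        "1080p": (1920 * 1080, 6.62),
    }

    px  = np.array([v[0] for v in listed.values()], dtype=float)
    ms  = np.array([v[1] for v in listed.values()], dtype=float)
    eff = px / ms  # ‘efficiency’ in px/ms: roughly 221k, 252k, 313k

    for name, (p, t) in listed.items():
        print(f"{name}: {1000 / t:.0f} FPS equivalent, {p / t:.0f} px/ms")

    # Power-curve fit eff = a * pixels^b, done as a straight line in log-log space.
    b, log_a = np.polyfit(np.log(px), np.log(eff), 1)
    a = np.exp(log_a)

    for name, p in [("1440p", 2560 * 1440), ("2160p", 3840 * 2160)]:
        e = a * p ** b        # estimated px/ms
        frametime = p / e     # estimated ms per generated frame
        print(f"{name}: ~{e:.0f} px/ms, ~{frametime:.2f} ms, ~{1000 / frametime:.0f} FPS")
    ```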

    … Now this is all extremely rough math, but the basic takeaway is that frame gen, even this faster and higher quality frame gen, which doesn’t introduce input lag the way DLSS or FSR does, is only worth it if it can generate a frame faster than you could otherwise fully render it normally.

    (I want to again stress here this is very rough math, but I am ironically forced to extrapolate performance at higher resolutions, as no such info exists in the paper.)

    I.e., if your rig is natively running 1080p at 240 FPS, 1440p at 120 FPS, or 4K at 60 FPS… this frame gen would be pointless.

    I… guess if this could actually somehow be implemented at a driver level, as an upgrade to existing hardware, that would be good.

    But … this is GPU tech.

    Which, like DLSS, requires extensive AI training sets.

    And is apparently proprietary to Intel… so it could only be rolled out on existing or new Intel GPUs (until or unless someone reverse-engineers it for other GPUs), which basically everyone would have to buy new, as Intel only just started making GPUs.

    It’s not gonna somehow be a driver/chipset upgrade to existing Intel CPUs.

    Basically, this seems to be fundamental to Intel’s gambit to make its own new GPUs stand out: build GPUs for less cost, with less hardware devoted to G-buffering, and use this frame gen method in lieu of that.

    It all depends on the price to performance ratio.



  • Similar stuff has happened to me.

    Here’s a rough template of an input questionnaire in MSFT Forms; it’s not actually ready yet, as we haven’t set up the actual place the inputs will be recorded, nor set up a way to mirror it into the actual database that our entire intranet uses.

    Come back after the weekend, and the dummy questionnaire has replaced the front end of our old system, meaning we’ve functionally not logged about 72 hours of requests for assistance from the homeless, during a blizzard, and during COVID.

    After this, our webmaster / marketing director, this woman who earns a quarter mil a year… straight up told me, in an email, she does not actually read anything I write in my emails to her that requires scrolling.

    She’s very busy, you see.

    When she asked me, unprompted, in an in-person meeting, if I could ‘implement the blockchain’ in our database (Postgres, not that she knows what that is) for ‘security benefits’, I wanted to strangle her to death, but settled on collapsing my head into my hands, then looking up and saying no, that would make everything extremely inefficient and much more insecure.

    She says, oh really, are you sure about that?

    Yes.

    Ok then well I guess that wraps up this meeting (shit eating grin) keep up the good work!

    … I no longer work in the tech industry.




  • One tiny slipup in GPO and IT departments could end up with the most massive explicit data leak in history,…

    I get what you’re saying, but:

    Apply this same logic to ‘Considerable and substantial direct access to the kernel for who knows how many third party software engineers, without meaningful or comprehensive review of how they’re using that access.’

    Why, one serious, overlooked error in a widely used piece of enterprise software with this kernel access could basically brick millions of business computers and cost god knows how many millions or billions of dollars; they’d never do that!

    cough CrowdStrike cough.