I agree, unless it’s straight up paid software which I usually don’t mind paying for if it’s good and I need it. Although arguably uBlock Origin is so close to perfection that I can’t imagine how a paid ad blocker would hold up.
So…Red Dead Redemption infringes two of these three patents?
Is Nintendo afraid because Rockstar can actually afford the lawsuit?
Flatpaks also just come with a set of default permissions at install time, so running in a sandbox only really protects against flaws in the software, not against malicious intentions by its creator. Flatpak doesn’t have an “ask for permission” system afaik, at least not a standardized one. What you can do is add to or subtract from the default permission set the app itself specifies.
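Just to illustrate (the app ID and the specific permissions here are made up), tightening an app’s defaults boils down to stacking per-user overrides on top of its manifest, e.g. via `flatpak override`:

```python
# Rough sketch: add per-user overrides on top of a Flatpak app's
# manifest defaults. "org.example.App" is a placeholder app ID.
import subprocess

app_id = "org.example.App"

# Subtract from the defaults: drop network access and home-directory
# access that the app's own manifest may have granted itself.
subprocess.run(
    ["flatpak", "override", "--user",
     "--unshare=network", "--nofilesystem=home", app_id],
    check=True,
)

# Show the effective permissions (manifest defaults plus overrides).
subprocess.run(
    ["flatpak", "info", "--show-permissions", app_id],
    check=True,
)
```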
With the iPhone 14 no longer being sold the specs of the rumored SE 2025 make a lot more sense.
YouTube is by far the slowest website I visit. It’s so bloated.
Braiding doesn’t really increase the cable quality per se though…?
It’s $90 because it has fairly thick wiring and, as Margot said, is likely an active cable (with a chip in the plug). It’s actually fairly cheap considering the feature set.
I have this cable: https://www.spigen.com/products/arcwire-usb-c-to-usb-c-cable-pb2202
It’s 2 meters long, rated for 240 watts, and supports Thunderbolt 4/USB 4 (40 Gbps).
I couldn’t test 240-watt charging as I don’t have any device pulling more than 100 watts, but the Thunderbolt 4 part definitely works.
Apple sells a 3 meter Thunderbolt 4 cable (albeit limited to 100 watts of power) that isn’t optical either (I think there’s some special circuitry in the plugs though).
Just dd any ISOHybrid to an internal disk.
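Purely as an illustration of what dd does here (the paths are placeholders, and pointing this at the wrong device will wipe it):

```python
# Equivalent of "dd if=image.iso of=/dev/sdX bs=4M": copy the hybrid ISO
# byte-for-byte onto the raw target disk. "image.iso" and "/dev/sdX" are
# placeholders; double-check the target device, this overwrites it.
import shutil

with open("image.iso", "rb") as src, open("/dev/sdX", "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
```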
I’d actually be surprised if Apple pays anything to OpenAI at the moment. Obviously running some Siri requests through ChatGPT (after the user confirms that’s what they want to do) is quite expensive for OpenAI, but Apple Intelligence doesn’t touch OpenAI servers at all (just Siri has ChatGPT integration).
Even then, there’ll obviously still be a lot of requests, but the problem OpenAI has is that they aren’t really in a strong negotiating position. Google owns Android, so most phones default to Gemini, instantly giving it a huge advantage in market share. OpenAI doesn’t have its own platform, so Apple, with the second-largest smartphone OS install base, is OpenAI’s best chance.
Apple might benefit from OpenAI, but OpenAI needs Apple way more than the other way around. Apple Intelligence runs perfectly fine (well, as “perfectly fine” as it currently does) without OpenAI; the only functionality users would lose is the option to redirect “complex” Siri requests to ChatGPT.
In fact, I wouldn’t be surprised if OpenAI actually pays Apple for the integration, just like Google pays Apple a hefty sum to be the default search engine for Safari.
Apple Intelligence isn’t “powered by OpenAI” at all. It’s not even based on it.
The only time OpenAI servers are contacted is when you ask Siri something it can’t compute with Apple Intelligence, but even then it clearly asks the user first if they want to send the request to ChatGPT.
Everything else regarding Apple Intelligence runs either on-device or on their “Private Cloud Compute” infrastructure, which apparently uses M2 Ultra chips. You then have to trust Apple that their claims regarding privacy are true, but you kind of do that when choosing an iPhone in the first place. There’s some pretty interesting tech behind this actually.
I am SCP-426, your toaster.
“Whistleblower” because he had a negative opinion on his former company?
A uBlock Origin filter or ClearURLs, for example.
I actually fell asleep playing the original, so…
(not hating on the game, it literally happened though)
An expensive gadget that requires the cloud to function and is designed to manipulate young children into believing it is their “friend”.
How this is even legal is beyond me.
CUDA is a proprietary platform that (officially) only runs on Nvidia cards, so making projects that use CUDA run on non-Nvidia hardware is not trivial.
I don’t think the consumer-facing stuff can be called a monopoly per se, but Nvidia can easily force proprietary features onto the market (G-Sync before they adopted VESA Adaptive-Sync, DLSS, etc.) because they have such a large market share.
Assume a scenario where Nvidia has 90% market share and Nvidia cards still only support adaptive sync via their proprietary G-Sync solution. Display manufacturers obviously want to cater to the market, so most displays ship with G-Sync support instead of VESA Adaptive-Sync, and 9 out of 10 customers will likely buy a G-Sync display because they have Nvidia cards. Now everyone has a monitor supporting some form of adaptive sync.

AMD and Nvidia then release their new GPU generation, and viewed in isolation (in this hypothetical scenario), AMD cards are 10% cheaper for the same performance and efficiency as their Nvidia counterparts. The problem for AMD is that even though they have the better cards per dollar, 9 out of 10 people would need a new display to get adaptive sync working with an AMD card (because their current display only supports proprietary G-Sync), and AMD can’t possibly undercut Nvidia by so much that the price difference also covers a new display. So 9 out of 10 customers go for Nvidia again.
To be fair to Nvidia, most of their proprietary features are somewhat innovative. When G-Sync first came out, VESA Adaptive-Sync wasn’t really a thing yet. DLSS was way better than any other upscaler in existence when it released and it required hardware that only Nvidia had.
But with CUDA, it’s a big problem. Entire software projects just won’t (officially) run on non-Nvidia hardware, so Nvidia is able to charge whatever they want (unless what they’re charging exceeds the cost of switching to competitor products and, importantly, porting over the affected software).
I’m not sure how sustainable this model is, especially when a reader browses via a link aggregator and therefore reads news articles on many different websites. I doubt most people want (or can afford) subscriptions to dozens of different news outlets, as that’ll quickly add up to a triple-digit monthly bill.
Something like Flattr, but maybe non-optional, would be better: pay a fixed monthly fee and have it split among all the sites you read articles on (weighted by article count, reading time, or whatever).
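Just to sketch the idea (all names and numbers made up):

```python
# Toy sketch: split one fixed monthly fee across outlets, weighted by
# reading time. Every name and number here is hypothetical.
monthly_fee = 10.00  # what the reader pays per month

# minutes spent reading each outlet this month
reading_time = {
    "outlet-a.example": 120,
    "outlet-b.example": 45,
    "outlet-c.example": 15,
}

total_minutes = sum(reading_time.values())
payouts = {site: monthly_fee * minutes / total_minutes
           for site, minutes in reading_time.items()}

for site, amount in payouts.items():
    print(f"{site}: {amount:.2f}")
```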
I think most of VSCode’s performance improvements just stem from newer CPUs being faster.
Yo Gaben, are you donating your 30% share as well?
…when comparing TFLOPs, and those aren’t comparable across architectures (let alone architectures from different companies!).
If we take similarly performing (in rasterization) Ampere and RDNA 2 cards (say a 3080 and a 6800 XT), the 3080 has 29.77 TFLOPs and the 6800 XT has 20.74 TFLOPs, so an RDNA 2 FLOP is worth about 1.4x as much as an Ampere FLOP.
So extrapolating from the Deck’s 1.6 “RDNA 2 TFLOPs”, we get about 2.24 “Ampere TFLOPs”, which would make the Deck quite a bit faster than the Switch 2 in portable mode, but slower than the Switch 2 when docked.
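For anyone who wants to redo the napkin math (same spec-sheet numbers as above, nothing measured):

```python
# Napkin math from the figures above (spec-sheet TFLOPs, not benchmarks).
rtx_3080_tflops = 29.77    # Ampere
rx_6800_xt_tflops = 20.74  # RDNA 2, roughly matches the 3080 in raster

# Worth of one RDNA 2 TFLOP expressed in "Ampere TFLOPs" at equal raster perf.
factor = rtx_3080_tflops / rx_6800_xt_tflops
print(f"conversion factor: ~{factor:.2f}x")  # ~1.44x (rounded to 1.4 above)

deck_rdna2_tflops = 1.6
# ~2.3 with the exact factor; 2.24 when using the rounded 1.4x figure
print(f"Deck: ~{deck_rdna2_tflops * factor:.2f} 'Ampere TFLOPs'")
```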
This is obviously all just wild and silly speculation, but I doubt the Switch 2 will match the Deck in portable mode. Samsung 8nm would just eat too much power for this to realistically happen in a handheld form factor.