• 0 Posts
  • 112 Comments
Joined 1 year ago
Cake day: July 30th, 2023

  • I work in IT. If someone came to ask me if they could install this on their system I’d tell them no, based only on this trailer. You’ve got to give us more info.

    I’m all for open source and open systems that can be built up as needed, but people like me would need information to make decisions. Unlike your typical corporate executive or manager, technical people aren’t as easily conned by hype videos. I’ve seen more information published from a game company that’s trying to hide spoilers. The only technical information I could spot was that neofetch-like screen, so I know you’ve got something Unix-like.

    Also, if you’re going to be coy and post publicly but then send people on a treasure hunt, pick a less generic name or else you’re going to get lost on page 3 of Google. You list “Open Systems OS 1.0” on one slide, and that search for me returns OSF/1 (1990s), OpenKylin (a Chinese Linux), and classic Mac OS as the top results.

    I get that software development takes time, and good software development takes even longer, but if you don’t have the info yet, it’s too early in the development cycle for a hype video. It also tells me that if you’re using semantic versioning, you’re using it wrong: a 1.0 in semver implies your first stable public API, which you either have and are hiding, or don’t have, in which case you shouldn’t be at 1.0.

    Even just one sentence, “I am building a Unix-like operating system using a [custom|Linux|BSD] kernel which is designed to [fly model airplanes]”, would be better than a void. That kind of sentence will get the right people interested in your project and asking the right questions. For example, if you’re about model airplanes or server hosting, I might be interested. But if you’re building an OS around someone who wants to use their computer like a VN, that’s not my cup of tea personally. You haven’t disproven the latter yet, though I assume it’s an unlikely case.



  • You very much gloss over the whole “distribution” part. That is one of the three main segments of an electric grid (generation, transmission, distribution). Practical Engineering has some great content about how the grid works, and iirc it addresses some of the problems renewables face. I recommend giving it a watch or at least a background listen. His first video is a good place to start, and the “which power plant does my electricity come from” video with the lake analogy is also a good intro.

    https://youtube.com/playlist?list=PLTZM4MrZKfW-ftqKGSbO-DwDiOGqNmq53

    Having a DER system is great and all because the transmission system doesn’t have to be as highly loaded (thus increasing the total load a system can withstand), but you still need to be pretty connected for something like this to work - and like others have pointed out, that’s going to mean building a parallel grid (which the energy regulators won’t like if you get too big) or hooking into the existing grid (which probably already has DER management baked into the system if you contact your local power company).

    The grid works because it’s big. That’s a feature, not a bug. And because we have AC not DC on the wire, any energized and connected generator has to be in dead lockstep with the grid frequency or else your hardware is going to become a load, make expensive noises, emit magic smoke, or some combination thereof.
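
    To put a rough number on “dead lockstep” (my own back-of-envelope illustration, with assumed figures, not something from the video): even a 0.1 Hz mismatch pushes a connected generator out of phase with the grid within seconds.

    ```python
    # Rough illustration with assumed numbers: how quickly a small frequency
    # mismatch drives a connected generator out of phase with the grid.
    grid_hz = 60.0
    gen_hz = 60.1                                      # generator running just 0.1 Hz fast (assumed)

    slip_cycles_per_s = abs(gen_hz - grid_hz)          # 0.1 cycles of slip per second
    phase_drift_deg_per_s = slip_cycles_per_s * 360    # 36 degrees of phase error per second

    seconds_to_antiphase = 180 / phase_drift_deg_per_s
    print(f"{phase_drift_deg_per_s:.0f} deg/s of drift -> fully out of phase in "
          f"{seconds_to_antiphase:.0f} s")
    ```

    At anti-phase the generator is effectively fighting the grid, which is where the expensive noises and magic smoke come in.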

    One major edge case you have is night charging of EVs. Let’s say I’m a 9-5 office worker with a standard parking lot at my workplace. I’m just a keyboard monkey doing whatever, not a decision maker as to what goes into the parking lot infrastructure-wise, so I’m at the mercy of whatever Facilities is doing, and gods know what that is. But I have a nice brand new EV, and I want to charge it. When I drive home after DST ends, it’s dark outside. There’s no solar to charge my car. Some renewables (like wind and hydro) work at night, but solar doesn’t. I’d need to charge an auxiliary power storage system during the day, and then transfer that to my EV battery at night. That’s more complexity (rough numbers below).
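
    As a rough illustration of what that extra buffer looks like, here’s a back-of-envelope sketch with numbers I’m assuming for the example (none of them come from the original post):

    ```python
    # Back-of-envelope sketch with assumed numbers (not from the original post):
    # sizing the daytime buffer needed for one EV's overnight top-up.
    daily_commute_kwh = 15          # assumed: ~50 mi at ~3.3 mi/kWh
    charger_efficiency = 0.90       # assumed AC charging losses
    buffer_roundtrip_eff = 0.90     # assumed store-by-day / discharge-by-night losses

    energy_from_buffer = daily_commute_kwh / charger_efficiency        # ~16.7 kWh delivered
    solar_to_bank_daytime = energy_from_buffer / buffer_roundtrip_eff  # ~18.5 kWh captured

    print(f"The buffer must deliver {energy_from_buffer:.1f} kWh and the panels must bank "
          f"{solar_to_bank_daytime:.1f} kWh during the day - per car, per day.")
    ```

    Multiply that by however many cars show up in the lot and the “auxiliary storage” stops being a small box.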

    Power storage for any kind of generation is a huge issue with many different solutions, and not all of them are batteries. And nothing is a perfect system, so there are energy losses whenever we convert from type A to type B of whatever.

    Or… I could just hook my EV up to the grid where the cost of my bill per kilowatt hour includes systems and people to manage keeping the system on voltage and on frequency, 24/7/365.25.

    Any power a solar system produces during the day that doesn’t get immediately used needs to be stored (it HAS to be put somewhere, or you either literally break the grid or waste it). That energy storage - along with the voltage converters - is going to take up extra cubic footage in your system that won’t be small, and it requires regular monitoring and maintenance to stay online. The system you’re proposing is going to create many fragments of the grid in the form of these pop-up neighborhood charging stations, each entirely dependent on what resources are available within less than a mile’s radius.

    Even if you assume that you don’t have to frequency-synchronize with the main grid and you’re fully isolated, you run into another big problem: local generation isn’t always consistent. Solar especially is very susceptible to the giant orb in the sky being around, so your local energy storage needs to hold enough power for some margin above your worst-case cloudy day while maintaining the output the local EVs depend on. If you get a 2- or 3-day storm, I hope you have enough energy storage to cover 4 to 5 days of low daytime charge rates (rough sizing below). In the playlist, there’s also a video about running hydroelectric generators in reverse to store energy as physical potential energy in a reservoir, as one example of how a grid might store excess energy.
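
    For a sense of scale, here’s a rough sizing sketch using numbers I’m assuming for illustration (again, none of these come from the original post):

    ```python
    # Rough storage sizing sketch with assumed numbers: riding out a multi-day
    # storm at a small solar-only neighborhood charging site.
    daily_demand_kwh = 200         # assumed: roughly a dozen EV top-ups per day
    storm_days = 3                 # assumed consecutive heavily clouded days
    cloudy_output_fraction = 0.25  # assumed: panels still make ~25% on a cloudy day
    safety_margin = 1.25           # assumed 25% margin above the worst case

    shortfall_per_day = daily_demand_kwh * (1 - cloudy_output_fraction)   # 150 kWh/day unmet
    storage_needed_kwh = shortfall_per_day * storm_days * safety_margin   # ~562 kWh usable

    print(f"Roughly {storage_needed_kwh:.0f} kWh of usable storage to cover a "
          f"{storm_days}-day storm with margin, before round-trip losses.")
    ```

    That’s well beyond a couple of wall-mounted home batteries for a single neighborhood site, and it sits there mostly idle on sunny weeks.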

    This is one thing the major grids are quite literally engineered and regulated to accomplish: because they are in fact so large, they can just import energy via the market system from somewhere that has better weather or is slightly off-peak on demand. And when one type of energy becomes less viable for a given weather condition (like solar on a cloudy day), they have a diversified generation portfolio of other sources: renewables like wind and hydro, nuclear for large steady demand, and grid-scale energy storage systems such as flywheels (fast stabilization), pumped water storage, and even giant batteries - and if all those fail, well, yes, we do still have dinosaurs to burn. (The world’s not perfect yet and we should by all means push for progress, but it will be a long road.) All of these sources are already working together to keep the grids on voltage and on frequency, with the physical and managerial infrastructure to keep everything connected and synchronized so that supply and demand stay balanced.



  • If you’re trying to do VDI in the cloud, that can get expensive fast on account of the GPU processing needed.

    Most of the protocols I know of that run CPU-only (and I’m perfectly happy to be proven wrong and introduced to something new) tend to fray at high latency or high resolution. The usual top two I’ve seen are VNC and RDP (the XRDP project on Linux), with NoMachine and plain X11 over SSH right behind them. I think NoMachine had the best performance of those, but it’s been a hot minute since I’ve personally used it. XRDP is the one I’ve used most often; getting login/lock/unlock working was fiddly at first, but it seems to be holding stable.

    Jumping from “basic connection, maybe barely but not always suitable for video” to “ultra-high-grade, high-speed”, we have Parsec and Sunshine+Moonlight. Parsec is currently limited to Windows/Mac hosting only (with a Linux client available), and both Parsec and Sunshine require or recommend a reasonable GPU to handle the encoding stage (although I believe Sunshine may support an x264 software encoder, which can exert a heavy CPU tax depending on your resolution). The specific problem of sourcing a GPU in the cloud (since you mention EC2) is what becomes the expensive part. This class of remote access frays less at high resolution and frame rate because it’s designed to transport video and games, rather than taking shortcuts to get a minimal desktop visible.
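
    For a sense of why the encoding stage (and thus the GPU) matters, here’s a rough back-of-envelope comparison using numbers I’m assuming for illustration:

    ```python
    # Rough bandwidth comparison with assumed numbers: why a desktop stream at
    # high resolution and frame rate needs a real-time video encoder somewhere.
    width, height, fps = 2560, 1440, 60     # assumed 1440p60 session
    bits_per_pixel_raw = 24                 # uncompressed 8-bit RGB

    raw_mbps = width * height * fps * bits_per_pixel_raw / 1e6   # ~5,300 Mbps uncompressed
    encoded_mbps = 20                       # assumed typical game-stream H.264 bitrate

    print(f"Raw frames: ~{raw_mbps:,.0f} Mbps vs encoded stream: ~{encoded_mbps} Mbps "
          f"(~{raw_mbps / encoded_mbps:,.0f}x reduction done by the encoder).")
    ```

    That compression has to happen in real time somewhere, and doing it on the CPU at that resolution is where the heavy tax shows up if there’s no GPU to offload it to.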





  • I think you’re asking too much from ZFS. Ceph, Gluster, or some other form of cluster-native filesystem (GFS, OCFS, Lustre, etc.) would handle all of the replication/writes atomically in the background instead of having replication run as a post-processor on top of an existing storage solution.

    You specifically mention a gap window - that gap window is not a bug, it’s a feature of using a replication timer, even if it’s based on an atomic snapshot. The only way to get around that gap is to use different tech. In this case, all of the options above have the ability to replicate data whenever the VM/CT makes a file I/O - and the workload won’t get a write acknowledgement until the replication has completed successfully. As far as the workload is concerned, the write just takes a few extra milliseconds compared to pure local storage (which many workloads don’t actually care about).
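
    Conceptually it works something like the toy sketch below (illustrative only - this is not Ceph’s actual I/O path, and the latencies are assumed):

    ```python
    import time

    # Toy model of synchronous replication (illustrative only, not Ceph's real
    # implementation): the write isn't acknowledged until every replica confirms.
    REPLICA_LATENCY_S = {"node-a": 0.001, "node-b": 0.002, "node-c": 0.003}  # assumed

    def replicated_write(data: bytes) -> float:
        # Replicas are written concurrently, so the ack waits on the slowest one.
        slowest = max(REPLICA_LATENCY_S.values())
        time.sleep(slowest)      # stand-in for "fan out, wait for every replica's ack"
        return slowest

    ack_after = replicated_write(b"some block")
    print(f"Write acknowledged after ~{ack_after * 1000:.1f} ms - no gap window to lose.")
    ```

    The cost is a little extra write latency on every I/O; the benefit is that there is never a window of unreplicated data.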

    I’ve personally been working on a project to convert my lab from ESXi vSAN to PVE+Ceph, and conversions like that (even a simpler one like PVE+ZFS to PVE+Ceph) would require the target disk to be wiped at some point in the process.

    You could try temporarily storing your data on an external hard drive via USB, or, if you can get your workloads into a quiet state or maintenance window, you could use the replication you already have and rebuild the disk (but not the PVE OS itself) one node at a time, restoring/migrating the workloads to the new Ceph target as each node is completed.

    On paper (I have not yet personally tested this), you could even take it a step farther: for all of your VMs that connect to the NFS share for their data, you could replace that NFS container (a single point of failure) with the cluster storage engine itself. There’s no rule I know of that says you can’t. That way, your VM data is written directly to the engine at a lower latency than going VM -> NFS -> ZFS/Ceph/etc.



  • My server rack has

    • 3x Dell R730
    • 1x Dell R720
    • 2x Cisco Catalyst 3750x (IP Routing license)
    • 2x Netgear M4300-12x12f
    • 1x Unifi USW-48-Pro
    • 1x USW-Agg
    • 3x Framework 11th Gen (future cluster)
    • 1x Protectli FE4B

    All together that draws… 0.1 kWh… in 0.327s.

    In real-time terms, measured at the UPS, I have a stable steady-state load of 900-1100 W depending on what’s under load. I call it my computationally efficient space heater because it generates more heat than my apartment needs in winter, except on the coldest of days. It has a dedicated 120 V 15 A circuit.
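
    For scale, here’s the back-of-envelope math, using an electricity rate I’m assuming (yours will differ):

    ```python
    # Back-of-envelope running cost, using an assumed electricity rate.
    steady_state_w = 1000        # middle of the stated 900-1100 W range
    rate_usd_per_kwh = 0.15      # assumed utility rate

    kwh_per_day = steady_state_w / 1000 * 24    # 24 kWh/day
    cost_per_day = kwh_per_day * rate_usd_per_kwh

    print(f"{kwh_per_day:.0f} kWh/day, ~${cost_per_day:.2f}/day, "
          f"~${cost_per_day * 30:.0f}/month at ${rate_usd_per_kwh}/kWh.")
    ```

    It also sits comfortably under that dedicated circuit: 120 V x 15 A is 1800 W, or about 1440 W continuous under the usual 80% rule.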









    • I never said anything about EFI not supporting multi-boot. I said that the partitions had to be kept in lockstep during updates. I recognize the term “manual” might have been a bit of a misnomer there, since I included systems where the admin has to take action to enable replication. ESXi (my main hardware OS for now) doesn’t even have software RAID for single-server datastores (only vSAN). Windows and Linux can both do it, but it’s a non-default manual process of splicing the drives together with no apparent automatic replacement mechanism - full manual admin intervention. With hardware RAID, you just have to plop the new disk in and it splices the drive back into the array automatically (if the drive matches).
    • Dell and HPe both have had RAM caching for reads and writes since at least 2011. That’s why the controllers have batteries :)
      • also, I said it only had to handle the boot disk. Plus you’re ignoring the fact that all modern filesystems will do page caching in the background regardless of the presence of hardware cache. That’s not unique to ZFS, Windows and Linux both do it.
    • mdadm and hardware RAID offer the same level of block consistency validation, to my current understanding - you’d need filesystem-level checksumming no matter what, and since mdadm and hardware RAID are both filesystem-agnostic, they will support the same filesystem-level features about equally (Synology implements Btrfs on top of mdadm - I saw a small note somewhere that their implementation has Btrfs request a block rebuild from mdadm if it detects issues, but I have been unable to verify this claim, so I do not consider it (yet) as part of my hardware vs md comparison)

    Hardware RAID just works, and for many, that’s good enough. In more advanced systems, all it’s got to handle is a boot partition, and if you’re doing your job as a sysadmin there’s zero important data in there that can’t be easily rebuilt or restored.


  • I never said I didn’t use software RAID, I just wanted to add information about hardware RAID controllers. Maybe I’m blind, but I’ve never seen a good implementation of software RAID for the EFI partition or boot sector. During boot, most systems I’ve seen will always try to access one partition directly, then a second in order, which bypasses the concept of a RAID, so the two would need to be kept manually in sync during updates.

    Because of that, there’s one notable place where I won’t use software RAID - I always use hardware RAID for at minimum the boot disk, because Dell firmware natively understands everything about it from a detect/boot/replace perspective (or, in a good way, doesn’t see anything at all). All four of my primary servers boot from either a StarTech RAID card similar to a Dell BOSS or an array directly on the PERC. It’s only enough space to store the core OS.

    Other than that, at home all my other physical devices are hypervisors (VMware ESXi for now, until I can plot a migration), dedicated appliance devices (Synology DSM uses mdadm), or don’t have redundant disks (my firewall, which is backed up to git, and my NUC Proxmox box - both firewalls and the PVE box run ZFS for its features).

    Three of my four ESXi servers run vSAN, which is like Ceph and replaces RAID. Like Ceph and ZFS, it requires using an HBA or passthrough disks for full performance. The last one is my standalone server. Notably, ESXi does not support any software RAID natively that isn’t vSAN, so both of the standalone server’s arrays are hardware RAID.

    When it comes time to replace that Synology, it’s going to be on TrueNAS.