Raspberry Pi 4, Docker: gluetun (qBittorrent, Prowlarr, FlareSolverr), tailscale (Jellyfin, Jellyseerr, Mealie), Radarr/Readarr/Sonarr, Pi-hole, Unbound, Portainer, Watchtower.

Raspberry Pi 3, Docker: Pi-hole, Unbound, Portainer.

  • 2 Posts
  • 61 Comments
Joined 2 years ago
Cake day: June 26th, 2023

  • If nothing else, thank you for informing me about hide.me.

    My personal inability to intelligently compare VPNs is what’s holding me back from port forwarding.

    I’ve been trying to articulate what I mean, and failing:

    A VPN should be at least affordable for me, no point looking if I can’t afford it.

    It should be suitably secure, again no point looking if I don’t trust them to give/sell/surrender my data.

    It should be suitably fast, no point looking if it’s slower than dialup.

    And, it should have a minimum of features: port forwarding, easy to set up, etc.

    Where the minimums are is subjective but I think these are the things that each of us consider. Price, privacy, performance and feature set.

    Comparisons are either really good for “here’s the cheapest, here’s the most private and here’s the fastest” but neglect whether they’re P2P friendly or allow port forwarding. Or, the comparisons are really detailed on the feature set “max handshake encryption, max data encryption” but neglect how much I might pay.

    It’s a whole lot of research for something I know I don’t/won’t understand, and with potentially huge consequences should I get it wrong. So: “Here’s the most private”? I’ll take that one, please.

    I’m currently on Mullvad; it topped a bunch of VPN comparisons for ‘normals’ on security, and I have been content with them. But I’m ready to move up when my sub ends. Testimonials are just about all I’ve got.

    Edit: I suppose it’s ‘mid-level’ guides I think are missing. Beginners have their cheap/secure/fast articles. Advanced users can compare on “max handshake encryption”, whatever that means. I need a “so you want to effectively and securely support the swarm, here are your options.”


  • I’m the “it works for me” normie. At least right now, accepting a 10% failure rate comes with the benefit of a decrease in complexity, and thus an increase in security (on account of me being less likely to fuck it up). This is an attractive proposition.

    When I was beginning, TRaSH-Guides was the scripture, and on their port forwarding page (for qBit) they only mention TorGuard. On the TorGuard page they quote:

    As of 13 March 2022 Torguard Settles Piracy Lawsuit and has agreed to use commercially reasonable efforts to block BitTorrent traffic on its servers in the US using firewall technology. ‼

    I talked to several people and they are still able to use TorGuard for torrents, perhaps because the connection is encrypted. Others just selected a server in another country.

    “TorGuard settles piracy lawsuit” is scary for normies; at least it was for me when I was setting this all up. So I went with Mullvad, who actively do not want to know who I am. I’m a UK resident, so my entire Linux ISO stack is under Gluetun.

    Generally the documentation around port forwarding (“who to use, how, and how much they cost”) was hard to find, difficult to follow, or out of date. Perhaps I should look again though.

    Ideally, I’d want a “you want budget, use X at £#pcm; you want privacy, use Y at £#pcm; you want speed, use Z at £#pcm” article, with guides for getting X, Y and Z working in the style of TRaSH. I get that takes time and effort, but I think that’s what it would take for mass adoption. Advanced users can debate the minutiae elsewhere on the best VPN/client combo; advanced users are building seedboxes. Normies need 3 meaningful choices (price, privacy, speed) and hand-holding to the finish line.


  • Fedegenerate@lemmynsfw.com to Selfhosted@lemmy.world · Mini pc arriving tomorrow · edited 6 days ago

    Op, I was you 12 months ago. +1 for installing Proxmox. The ability to make mistakes in an LXC and always having the nightly backup right there was worth it alone. Helper scripts get you close to where you want to go, fast. As for guides, there’s a bunch; Raid Owl and TechnoTim both have initial Proxmox setup guides. There are many like them, those are just two I remember.

    It might just be me, but I struggled with every step of every guide I followed, mostly because I’d skip to copy-pasting the commands… Don’t do that. ChatGPT: plug the command in there and start quizzing it: “what does this do, what are the flags doing, I want to do X, will this command do that?” Then don’t copy ChatGPT either; take its output back to the documentation and make sure it makes sense. Then take a snapshot. Then paste the thing. It at least forced me to slow down.

    In the beginning I spent about a month, just on a Pi, getting a Pi-hole and a servarr stack installed and configured. Then I nuked it and rebuilt in a couple of weeks. Then I messed up again and rebuilt in a couple of days. I dedicate 1 hour to trying to fix what I broke, using ChatGPT as mentor/rubber duck; if I can’t make progress on a fix in that time, I load the snapshot. Troubleshooting is a great skill; however, everything you need gets installed at least once, so get good at installing things. Backups need testing and you should be familiar with the process, so get good at recovering from backups. ChatGPT solves most surface-level problems. You’ll get to a point where you’re stuck and ChatGPT won’t be any help either, but let it get you there quickly.

    I genuinely prefer Dockge to Portainer, but learn Portainer. As a rule, learn the industry standard, then migrate. There are tonnes of articles and resources for Portainer, and almost everyone using Dockge can help you with Portainer, not the other way around. The only exception is when the non-industry-standard tool is specifically made to solve problems you have with the industry standard; I went with Nginx Proxy Manager over nginx, for example. GUIs are nice and I can see things working, unlike pasting a massive config and hoping. Now I have huge compose.yaml stacks for Docker where I used to install containers one by one in Portainer.

    Security is hard. Outsource all you can. Your ISP firewall is perfectly serviceable; don’t punch holes in it (for now). Tailscale is perfectly serviceable; don’t try to make your own tunnels (for now). One of my earliest posts was me installing a firewall on my Pi, separate from my router, and then going into a blind panic about punching holes in my firewall. Funny to look back on; my ISP firewall is still completely intact, I picked a different path.

    Each iteration, add one layer of complexity and take easy wins for everything else. I set up Pi-hole bare metal, messed up the Unbound install, go again. I used docker starter to set up Pi-hole + Unbound, messed up [something]… go again… Prioritise “working” over “perfect”. You don’t know what perfect is anyway. I don’t know what perfect is, but just getting something working teaches me what would be better for the next go-around. If what you did is “wrong”, it’s going to break sooner rather than later, so you get to go again. If what you did works forever, be happy and enjoy the thing you built.

    Oh, I forgot: no big updates right before bed, before a big event, or when you’re out of the house. I once had an auto-updater [Watchtower] go off and delete my access to the internet [Pi-hole] before downloading the new image, on my fiancée’s first day off, and while I was at work. I learned a lot that day about redundancy for infrastructure essential to Facebook, rightly so. If you can’t or won’t fix broken things right then, don’t be doing stuff that might break things.



  • I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logs makes it difficult to see where things went wrong, when they do.

    I’ve got automatic upgrades running on stuff, so it’s mostly fine. Dockge is running purely to give me a way to upgrade Docker images without having to SSH in. It’s just the monthly routine of running “apt update && apt upgrade -y” five times over that sucks.

    Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” Homarr page as a budget Uptime Kuma, so I can quickly look there to make sure everything is pinging at least. I made the page so I could quickly get to everyone’s Dockge, Pi-hole and nginx, but the pings were a happy accident.
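    The no-logs objection to cron mostly goes away if the job writes its own log. A minimal sketch of that idea (the wrapper name, log path and schedule are all made up, not an existing tool):

```shell
#!/bin/sh
# log_run: run a command and append its output, a timestamp and its
# exit status to a log file, so an unattended cron run leaves a trail.
LOG="${LOG:-/tmp/auto-update.log}"
log_run() {
    {
        echo "=== $(date -u '+%Y-%m-%d %H:%M:%S') run: $*"
        "$@"
        echo "=== exit status: $?"
    } >>"$LOG" 2>&1
}

# The crontab entry would then look something like:
#   30 4 * * 1 LOG=/var/log/auto-update.log /usr/local/bin/update.sh
log_run echo "pretend this was apt-get update"
```

    When something breaks, the log tells you which run it broke in and what the command printed, which is the information plain cron throws away.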


  • On my home network I have Nginx Proxy Manager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out, or migrating wholly over to HTTPS). My domain is a ######.xyz, so it’s cheap.

    For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx which proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post-hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.
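    A relay like that is just a reverse proxy on the Pi; a minimal sketch of one nginx server block (the hostname, tailnet IP and port here are made up):

```
# /etc/nginx/conf.d/relay.conf -- hypothetical relay config.
# The Pi is joined to the tailnet; LAN clients hit the Pi, and nginx
# forwards the request over Tailscale to the box at home.
server {
    listen 80;
    server_name immich.swirl;

    location / {
        proxy_pass http://100.64.0.10:2283;  # made-up tailnet IP and port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

    One server block per service keeps it simple, and nothing on the friend’s network ever needs a Tailscale client installed.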

    For bonus points, I run Pi-hole to pretty up the domain names to service.swirl, and run a Homarr instance so no one needs to remember anything except home.swirl; but if they do remember immich.swirl, that works too.

    If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But on aggregate… I have checklists. One day I’ll write a script that will SSH into a machine > update/upgrade the OS > docker compose pull/rebuild/prune > move on to the next relay… That’ll be my impetus to learn how to write a script.
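    That script doesn’t need to be much. A sketch of the loop, assuming SSH keys are already set up and each box keeps its compose stacks in ~/stacks (hostnames and paths are made up; RUN defaults to a dry run here so the sketch is safe to paste):

```shell
#!/bin/sh
# maintain: for each host, update the OS, refresh the compose stack,
# and prune old images. RUN=echo is a dry run (the default here);
# set RUN=ssh to actually execute on the remote machines.
RUN="${RUN:-echo}"

maintain() {
    for host in "$@"; do
        echo ">>> $host"
        "$RUN" "$host" 'sudo apt update && sudo apt upgrade -y \
            && cd ~/stacks \
            && docker compose pull && docker compose up -d \
            && docker image prune -f'
    done
}

maintain pi4.local pi3.local relay1.local  # made-up hostnames
```

    Run it from one machine and the checklist collapses into a single command; the output per host doubles as a crude log.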


  • Get a domain and set about moving over to HTTPS with Let’s Encrypt and nginx.

    Learn to write an nginx config. NPM just works so well, though.

    Fix my permission issues. I have my media zpool on 777 so all the LXCs work, and I have to run Libation in a VM as root. I’ve been banging my head against this on and off for a while.

    Figure out why Paperless isn’t saving to the correct place. Also, figure out where Paperless is saving to.

    Containerise Libation.

    I give friends and family access to my server via a relay, just a Raspberry Pi Zero with Tailscale, Pi-hole and nginx on it. I have reasons for going this route. Anyway, get a couple more of those into the wild. Also, streamline the process somewhat.

    Learn to write an ACL config for Tailscale so I can have services access nothing, users access services, and admins access everything.
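    That three-tier split might look something like this in Tailscale’s policy file (a sketch only; the tag name is made up, and the file format is huJSON, so comments and trailing commas are allowed):

```
{
  "tagOwners": {
    "tag:service": ["autogroup:admin"],
  },
  "acls": [
    // admins access everything
    {"action": "accept", "src": ["autogroup:admin"], "dst": ["*:*"]},
    // users access tagged services only
    {"action": "accept", "src": ["autogroup:member"], "dst": ["tag:service:*"]},
    // no rule lists tag:service as a src, so services access nothing
  ],
}
```

    Tailscale ACLs are default-deny, so anything not explicitly accepted (like the service tag) gets nothing.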


  • On mobile so you’ll have to forgive format jank.

    It depends how each image handles ports. If C1 has its ports set up as 1234:100 and C2 as 1234:500, the clash is only on the host side, so remapping one host port:

    services:
      gluetun:
        ports:
          - 1234:100 #c1
          - 1235:500 #c2

    […]

    will solve the conflict.

    Sometimes an image will let you change its internal port with an environment variable, so:

    services:
      gluetun:
        ports:
          - 1234:1000 #c1
          - 1235:1234 #c2
      c1:
        environment:
          - UI_PORT=1000

    […]

    When both containers use the same internal port, e.g. C1: 1234:80 and C2: 1235:80, and neither’s documentation suggests how to change that port, I personally haven’t found a way to resolve the conflict.
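    For context, here is how those fragments fit into a full file, assuming both containers route through gluetun via network_mode (the c1/c2 image names are placeholders; the ports match the first example above):

```
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    ports:
      - 1234:100 #c1's web UI
      - 1235:500 #c2's web UI
  c1:
    image: example/c1  # placeholder
    network_mode: service:gluetun
  c2:
    image: example/c2  # placeholder
    network_mode: service:gluetun
```

    Because c1 and c2 share gluetun’s network namespace, all published ports have to live on the gluetun service, which is exactly why two containers wanting the same internal port clash in the first place.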