• Kazumara@discuss.tchncs.de · +20/-3 · 15 hours ago

    Funny how many here took this to be real, judging from the reactions. To me it’s an obvious joke.

    Question to you guys: How do you suppose 200 million customers will share the fewer than 65,536 ports available on that one address?
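    A quick back-of-the-envelope sketch in Python (treating the ~1,024 well-known ports as unusable is my assumption, and this is classic one-port-per-flow NAPT, before any port-reuse tricks):

```python
# One public IPv4 address under classic NAPT: each concurrent
# flow needs its own public source port.
customers = 200_000_000
usable_ports = 65_536 - 1_024  # assume the well-known range is off limits

print(f"{usable_ports / customers:.6f} ports per customer")  # 0.000323
print(f"{customers / usable_ports:.0f} customers per port")  # 3100
```

    In other words, at any given moment only about one customer in 3,100 could hold even a single connection.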

    • Fred@programming.dev · +7/-1 · 13 hours ago

      As @shane@feddit.nl says, you can use the same public port for many different destination addresses; vendors may call this something like “port overloading”.

      More importantly, you can install a large pool of public addresses on your CGNAT. For instance, if you install a /20 pool and multiplex 100 users per public address, you can have about 400,000 users on that CGNAT. 100 users per address is a comfortable ratio that will not affect most users; 1,000 users per address would be pushing it, but I’m sure some ISP will try it.
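      The pool arithmetic above, as a tiny Python sketch (the /20 size and the two ratios are the figures from the comment, not a recommendation):

```python
# A /20 pool of public IPv4 addresses leaves 32 - 20 = 12 host bits.
addresses = 2 ** (32 - 20)  # 4096 public addresses
for users_per_address in (100, 1000):
    total = addresses * users_per_address
    print(f"{users_per_address} users/address -> {total:,} users")
# 100 users/address  -> 409,600 users   (the ~400,000 figure)
# 1000 users/address -> 4,096,000 users
```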

      If you search for “CGNAT datasheet”, the first couple of results are products you can deploy today.

      • Kazumara@discuss.tchncs.de · +3 · 12 hours ago

        > As @shane@feddit.nl says, you can use the same public port for many different destination addresses; vendors may call this something like “port overloading”.

        I just responded to him on that point, while you were typing to me. I didn’t know this existed, thanks for pointing it out!

        > More importantly, you can install a large pool of public addresses on your CGNAT. For instance, if you install a /20 pool and multiplex 100 users per public address, you can have about 400,000 users on that CGNAT. 100 users per address is a comfortable ratio that will not affect most users; 1,000 users per address would be pushing it, but I’m sure some ISP will try it.

        Sure, yeah, I have seen a few threads on NANOG about the NAT address ratios people are using. I also think I remember someone saying he was forced to use 1,000 and that it kind of worked, as long as he pulled the heaviest users out of the pool. But if I recall correctly, he also said he made IPv6 available in parallel to reduce the CGNAT load.

        But the point that made this post ridiculous and an obvious joke is that it said “one address” :-)

        • Fred@programming.dev · +1 · 12 hours ago

          Well, the “one address” bit, sure :) But given the scale supported by CGNAT systems today, I don’t think being able to support an entire country behind a single cluster is that far off. At that point the difficulty becomes “is the 100.64.0.0/10 block big enough?” Or maybe they’re using DS-Lite to haul traffic from the private network to the NAT.
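          Whether 100.64.0.0/10 is big enough is easy to check (Python; the per-NAT-domain framing is my own reading of the problem):

```python
# RFC 6598 shared address space: 100.64.0.0/10 leaves 32 - 10 = 22 host bits.
cgnat_addresses = 2 ** (32 - 10)
print(f"{cgnat_addresses:,} addresses")  # 4,194,304

# So one flat NAT domain tops out around 4.2M subscribers. A whole country
# would need several domains, each reusing the /10 independently, or a
# scheme like DS-Lite, where IPv6 tunnels carry traffic to the AFTR and
# the IPv4 address only has to be unique per tunnel endpoint.
```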

    • shane@feddit.nl · +5 · 13 hours ago

      A TCP session is a unique combination of client IP, client port, server IP, and server port.

      So you can use the same IP and port as long as the destination is a different IP or port.

      This means that in principle you could use the same client IP and port to connect to every IP address on the Internet, with up to 65,536 concurrent sessions to each one (one per server port). 😆

      This wouldn’t help with popular destinations, since a lot of people are going to the same IP address and port, but for many (most?) of those you probably have some sort of CDN servers in your data centers anyway.
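      A minimal sketch (hypothetical Python, not any vendor’s actual implementation) of why keying NAT state on the full 4-tuple lets a public port be reused:

```python
# NAT state keyed on (public IP, public port, dest IP, dest port):
# two subscribers can share public port 40000 as long as the
# destination (IP, port) differs.
nat_table = {}  # 4-tuple -> subscriber flow

def allocate(public_ip, public_port, dst_ip, dst_port, flow):
    key = (public_ip, public_port, dst_ip, dst_port)
    if key in nat_table:
        raise ValueError("collision: same public port AND same destination")
    nat_table[key] = flow

# Same public IP/port, different destinations: both fit.
allocate("203.0.113.1", 40000, "198.51.100.10", 443, "subscriber A")
allocate("203.0.113.1", 40000, "198.51.100.20", 443, "subscriber B")

# Same destination too -> genuine collision; the NAT must pick
# a different public port for this flow.
try:
    allocate("203.0.113.1", 40000, "198.51.100.10", 443, "subscriber C")
except ValueError as e:
    print(e)
```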

      • Kazumara@discuss.tchncs.de · +2 · 13 hours ago

        > A TCP session is a unique combination of client IP, client port, server IP, and server port. So you can use the same IP and port as long as the destination is a different IP or port.

        Fair point! I wasn’t aware of any NAT working that way, but they could exist, I agree. It does blow up the session table a bit, but we are talking about a hell of a large theoretical system here anyway, so it’s not impossible.

        > This wouldn’t help going to popular destinations, since they have a lot of people going to the same IP address and port, but for many (most?) of them you probably have some sort of CDN servers in your data centers anyway.

        Actually, we have recently seen a few content providers not upgrading their cache servers, preferring instead to fall back to our PNIs (which, to be fair, are plenty fast and have good enough latencies). On the other hand, others have made new caches available recently. It seems there isn’t a universal best strategy the industry is converging on at the moment.

    • A_norny_mousse@feddit.org · +5 · 14 hours ago

      By creating new protocols that then become new quasi-standards that every system has to integrate because “everybody else does it too”?

      (And yeah, this one is a joke: ridiculing something that really exists by exaggerating it.)