I have two machines running docker. A (powerful) and B (tiny vps).
All my services are hosted at home on machine A, and all DNS records point to A. I want to point them to B instead and set up split-horizon DNS in my local network so I can still reach A directly. Ideally, A would no longer be reachable from outside without going through B.
How can I forward requests arriving on machine B to A over a tunnel like WireGuard without losing the source IP addresses?
I tried to get this working by creating two WireGuard containers. I think I only need iptables rules on the WireGuard container on A, but I am not sure. I am a bit confused about which iptables rules are needed to get WireGuard to properly forward the requests through the tunnel.
What are your solutions for such a setup? Is there a better way to do this? I would also be glad for some keywords/existing solutions.
Additional info:
- Ideally I would like to stay within Docker.
- Split-horizon DNS is no problem.
- I have a static IPv4 and IPv6 address on both machines.
- I also have spare ipv6 subnets that I can use for intermediate routing.
- I would like to avoid cloudflare.
You could try using an SSH reverse tunnel and forwarding the port to the VPS.
Another way is to set up WireGuard on the VPS, connect the powerful machine to it and keep it always connected there. (This isn't really a good option, since then all traffic is moved through the VPS.)
There is also grok, I think that's the name.
In general I think an SSH reverse port forward would be a decent way, and then you can use a reverse proxy on the VPS like nginx or Caddy (you need one that works on the host network).
I was hoping for a solution that allows for other protocols, not just HTTP and HTTPS. I will take a closer look at grok.
An SSH tunnel could work, I didn't think of that. I will have to test how this interacts with Docker, but I think it has to be set up directly on the host. I don't think the SSH tunnel limitation applies, since the service will still be reachable from A's local network. Speed might be a concern, but I will have to test.
Tailscale maybe? They have a mode where you can configure site-to-site links; you could route the Docker networks. https://tailscale.com/kb/1019/subnets
I have heard of it; it seems like a good option. If you use it, please tell me whether it can fulfil my requirements.
Mhh, I didn't know Headscale exists. Tailscale being proprietary was the main thing keeping me from using it.
I haven’t used Tailscale myself, but it seems like it’s basically just a Wireguard frontend.
Although correct, their feature set is amazing and expanding. Tailscale is my number one tool of choice these days; it's so simple and so handy.
"Technically correct" is the best form of correct. Still, having tried setting up WireGuard in the past, a dead-simple solution like Tailscale might be worth trying out, especially with the 100-device free tier.
You can do this with a site-to-site WireGuard VPN. You will need to set up the proper routing rules on each termination. On the internet-facing side you will want to DNAT (modify the destination, keep the source) to redirect the incoming traffic through the tunnel to your non-internet-facing side. Then, on the non-internet-facing side, you need to set up routing rules to ensure all traffic headed for public IPs traverses the tunnel. Then, back on the internet-facing side, you need to SNAT (modify the source, keep the destination) the traffic coming through the tunnel headed for the internet. Hopefully this helps. People saying this goes against standards are not really correct, as this is a great application for NAT.
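As a rough sketch of those rules on the internet-facing host, assuming (placeholder values) eth0 is its public interface with address 203.0.113.10, wg0 is the tunnel, and 10.0.0.2 is the tunnel address of the non-internet-facing host:

```sh
# enable packet forwarding on the internet-facing host
sysctl -w net.ipv4.ip_forward=1

# DNAT: redirect incoming HTTPS down the tunnel; the source IP stays untouched
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2

# let the forwarded traffic pass in both directions
iptables -A FORWARD -i eth0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT

# SNAT: traffic the inner host sends out through the tunnel leaves with the public address
iptables -t nat -A POSTROUTING -s 10.0.0.2 -o eth0 -j SNAT --to-source 203.0.113.10
```

Replies to the DNATed connections are translated back automatically by conntrack; the explicit SNAT rule is what lets connections the inner host opens itself reach the internet over the tunnel.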
Thank you very much. I knew I needed a few NAT rules but was unsure which exactly. I think I will be able to figure it out now. 😃
I’ve been looking at setting up something similar and plan on following this guide, putting Traefik in front of it as a TCP reverse proxy.
Wow, this may have been the missing piece to get my setup working. If I manage to do it, I will send you a URL to a git repo.
Awesome! I'm glad I could help. Good luck! I've been spending quite a bit of time figuring out how to get this to run alongside other services. I think I just need to add an extra iptables rule to ignore port 443, so HTTPS requests will go through Traefik first.
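Assuming the forwarding is done via a DNAT rule in the nat table's PREROUTING chain, something along these lines might do it (hypothetical interface name, untested):

```sh
# bail out of nat PREROUTING early for HTTPS, so the packet never reaches the
# DNAT rule further down and stays on this host for Traefik to handle
iptables -t nat -I PREROUTING 1 -i eth0 -p tcp --dport 443 -j RETURN
```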
Looks nice. I think I will build two docker containers with wireguard and iptables. This blog will be a great help.
Check out Yggdrasil, you might find it useful.
Or just Tailscale (Headscale), or plain WireGuard.
Thanks. Will check out Yggdrasil.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- CGNAT: Carrier-Grade NAT
- Git: popular version control system, primarily for code
- HTTP: Hypertext Transfer Protocol, the Web
- IP: Internet Protocol
- NAT: Network Address Translation
- SSL: Secure Sockets Layer, for transparent encryption
- TLS: Transport Layer Security, supersedes SSL
- VPN: Virtual Private Network
- VPS: Virtual Private Server (as opposed to shared hosting)
- nginx: popular HTTP server
Keeping the source IP intact means you'll have trouble routing the return traffic back through host B.
Basically, host A won't be able to access the internet without going through B, which may not be what you want.
Here’s how it works:
On host A:
- add a /32 route to host B's public IP through your local ISP gateway (e.g. 192.168.1.1)
- set up a WireGuard tunnel between A and B:
  - host A: 172.17.0.1/30
  - host B: 172.17.0.2/30
- add a default route to host B's WireGuard IP
On host B:
- set up WireGuard (same config)
- add PAT rules to the firewall to DNAT incoming requests on the ports you need to 172.17.0.1
- add an SNAT/masquerade rule so all outbound requests from 172.17.0.1 are NATed with host B's public address (rough command sketch below)
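With the addresses above (and 203.0.113.10 as a placeholder for host B's public IP), the commands could look roughly like this:

```sh
# host A: keep the wireguard endpoint reachable via the local gateway,
# then send everything else through the tunnel
ip route add 203.0.113.10/32 via 192.168.1.1
ip route replace default via 172.17.0.2 dev wg0
# (host B's peer entry on A needs AllowedIPs = 0.0.0.0/0 for the default route to work)

# host B: forward the published ports down the tunnel (443 shown as an example)
# and masquerade host A's outbound traffic
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.17.0.1
iptables -t nat -A POSTROUTING -s 172.17.0.1 -o eth0 -j MASQUERADE
```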
This should do what you need. However, if I may comment, I'd say you should give up on carrying the source IP address down to host A. The setup I described is clunky and can fail in many ways. Also, I can see no benefit in doing it besides having "pretty logs" on host A. If you really need good logs, I'd suggest setting up a good reverse proxy on host B and forwarding its logs to a collector on host A.
It might help if you described what you’re actually trying to forward … and why the source IP matters.
ZeroTier would be my recommendation personally (it does what Tailscale does, but it's been doing it longer, and you can use whatever IP ranges you need vs. some public IPv4 address space Tailscale pools from).
Preserve the source IP, you say? Why?
The thing is, if you could do so (without circumventing the standards), that would imply the IP isn't actually a unique identifier, which it needs to be. It would also mean circumventing whitelists/blacklists would be trivial (it's not hard by any means, but it has some specific requirements).
The correct way to do this, even if there might be some hack to get the actual source IP through, is to put the source in an 'X-Forwarded-For' header.
As for ready-made solutions, I use NetBird, which has open-source clients for Windows, Linux and Android that I use without issues; it's perfectly self-hostable and easy to integrate with your own IdP.
The reason I want to preserve the IP is mostly for fancy Grafana plots and traceability.
`X-Forwarded-For` is great, but it only works for HTTP/HTTPS. Also, I would like to keep the HTTPS termination on machine A rather than on machine B. I will check out NetBird.
You want to group by IP in Grafana, and you're not dealing with HTTP traffic? Why not group on data or metadata in what is being sent, which is the common approach?
Can you elaborate on the part about the IP not being unique?
If you can fool the internet into thinking traffic coming from the VPS has the source IP of your home machine, what stops you from assuming another IP to bypass an IP whitelist?
Also, if you expect return communication, it would go to your VPS, which has faked the IP of your home machine. That technique would be very powerful for man-in-the-middle attacks, i.e. intercepting traffic intended for someone else and manipulating it without leaving a trace.
An IP, by virtue of how the protocol works, needs to be a unique identifier for a machine. There are techniques, like CGNAT, that allow multiple machines to share an IP, but really it works (in simplified terms) like a proxy and thus breaks the direct connection and limits you to specific ports. It's also added on top of the IP protocol and requires specific things, and either way it's the endpoint, in your case the VPS, that will be the presenting IP.
Each time you send a packet over the internet, several routers handle this packet without touching the source and destination IP addresses.
There is nothing stopping him from configuring the VPS to forward packets to the home server, rewriting the destination IP (and optionally the destination port as well) but leaving the source IP intact.
For outgoing packets, the VPS should rewrite the source (home server) IP and port and leave the destination intact. With iptables, this is done with `MASQUERADE` rules. This is pretty much how any NAT, including the ones behind home routers, works.
You then configure the home server to use the VPS as its gateway over WireGuard, which should achieve the desired result.
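As a rough sketch of that last step (placeholder keys, names and addresses), the home server's wg-quick config could make the VPS its default gateway like this:

```
# /etc/wireguard/wg0.conf on the home server -- hypothetical values
[Interface]
PrivateKey = <homeserver-private-key>
Address = 10.0.0.2/24

[Peer]
# the VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# 0.0.0.0/0 routes all IPv4 traffic through the VPS
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

With AllowedIPs = 0.0.0.0/0, wg-quick also sets up the policy routing needed to keep the tunnel endpoint itself reachable outside the tunnel.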
Yeah, I was just confused about the direction/flow he was asking for. He clarified, and his use case is fully solvable. Just not something I've personally dabbled in, since he wants it for non-HTTP traffic.
That's not what I want to accomplish. The clients connecting to machine B should not know that their traffic was handled by machine A. I will use DNAT to accomplish my goal. It is possible, because Tailscale can do exactly that. Thank you for your input though.
Maybe I am wrong we will see soon. 🙃
Well, that's just a normal reverse proxy then. In my setup I use Caddy to send traffic through the NetBird-managed WireGuard tunnel to my home machine that runs Jellyfin, but to any outside observer it looks like it's my VPS that is serving Jellyfin.
Yes, exactly, but without being limited to HTTP/HTTPS and without decrypting the traffic on the VPS.
That's why the `X-Forwarded-For` header won't work. It's one layer below.
Allow me to cross-post my recent post about my own infrastructure, which has pretty much exactly this established: lemmy.dbzer0.com/post/13552101.
At the homelab (A in your case), I have tailscale running on the host and caddy in docker exposing port 8443 (though the port matters not). The external VPS (B in your case) runs docker-less caddy and tailscale (probably also works with caddy in docker when you run it in `network: host` mode). Caddy takes in all web requests to my domain and reverse_proxies them to the tailscale hostname of my homelab on :8443. It does so with a wildcard entry (`*.mydomain.com`), and it forwards everything. That way it also handles the wildcard TLS certificate for the domain. The caddy instance on the homelab then checks for specific subdomains or paths, and reverse_proxies the requests again to the targeted docker container.

The original source IP is available to your local docker containers by making use of the `X-Forwarded-For` header, which caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:

```
{
    servers {
        trusted_proxies static 192.168.144.1/24 100.111.166.92
    }
}
```

replacing the first IP with the gateway in the docker network, and the second IP with the "virtual" IP of server A inside the tailnet. Your containers, if they're written properly, should automatically read this value and display the real source IP in their logs.
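For reference, the VPS-side Caddyfile described above can be as small as this (hypothetical hostnames; the wildcard certificate requires a DNS-01 challenge, i.e. a Caddy DNS provider module for your registrar, and I'm assuming the homelab caddy listens for plain HTTP on 8443):

```
# Caddyfile on the VPS (B) -- placeholder names
*.mydomain.com, mydomain.com {
    # forward everything to the homelab's tailscale hostname
    reverse_proxy homelab:8443
}
```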
Let me know if you have any further questions.