---
title: "How to work around tailscale breaking IPv6 on the host"
date: 2026-01-23
tags:
- linux
- tailscale
- administration
- networking
- routing
- ipv6
---

Yesterday I ran into an incredibly weird issue. I installed the [Tailscale](https://tailscale.com/) client on two of my virtual servers hosted in the Hetzner Cloud (running Ubuntu), and suddenly the websites they served stopped working. I suspected Tailscale, and indeed, `tailscale down` immediately restored functionality.

The websites in question actually live on GitHub Pages; my servers just act as reverse proxies to handle the domain and TLS termination. A look into the web server's `error.log` showed that the server could no longer reach its upstream at `github.io` while Tailscale was active. It wasn't a general loss of external connectivity though: IPv4 still worked fine, but the web server was trying to reach the upstream via IPv6, and that is where things failed.

I did some quick tests, pinging Google's DNS on both IPv4 (`8.8.8.8`) and IPv6 (`2001:4860:4860::8888`). They showed that while Tailscale was up, IPv6 connectivity broke down completely while IPv4 kept working:

```
$ sudo tailscale up && ping -c 3 8.8.8.8 && sudo tailscale down
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=3.98 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=3.69 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=3.63 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 3.631/3.771/3.986/0.162 ms

$ sudo tailscale up && ping -c 3 2001:4860:4860::8888 && sudo tailscale down
PING 2001:4860:4860::8888(2001:4860:4860::8888) 56 data bytes

--- 2001:4860:4860::8888 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2051ms
```

As a next step, I checked the route that would be used to reach `2001:4860:4860::8888`, with and without Tailscale:

```
$ ip route get 2001:4860:4860::8888
2001:4860:4860::8888 from :: via fe80::1 dev eth0 src aaaa:bbbb:cccc:dddd::1 metric 1024 pref medium

$ sudo tailscale up && ip route get 2001:4860:4860::8888 && sudo tailscale down
2001:4860:4860::8888 from :: via fe80::1 dev eth0 src fd7a:115c:a1e0::aaaa:bbbb metric 1024 pref medium
```

So the issue was that, for some reason, while Tailscale was running the system picked Tailscale's internal IPv6 `fd7a:115c:a1e0::aaaa:bbbb` (assigned to `tailscale0`) as the source address but still sent the packet through the default route on `eth0`, and that combination didn't work. Tailscale effectively hijacked all outgoing IPv6 traffic, even though the Tailnet wasn't involved at all.

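That source address also explains why the blackout was total: `fd7a:115c:a1e0::/48` sits inside the unique local address range `fc00::/7`, which is not routable on the public internet, so no host out there can ever send a reply back to it. As a rough illustration (the `is_ula` helper is my own, and it's only a textual heuristic on the first address byte, not a full parser):

```shell
#!/bin/sh
# fc00::/7 means the first byte is 0xfc or 0xfd, so any IPv6 address whose
# text form starts with "fc" or "fd" is a unique local address (ULA).
is_ula() {
    case "$1" in
        [Ff][CcDd]*) echo "$1: ULA, not routable on the public internet" ;;
        *)           echo "$1: not a ULA" ;;
    esac
}

is_ula "fd7a:115c:a1e0::aaaa:bbbb"   # the Tailscale source address
is_ula "aaaa:bbbb:cccc:dddd::1"      # the (anonymized) Hetzner address
```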
The routes looked fine to me, and some buddies I asked also didn't spot anything amiss:

```
$ ip -6 rule show
0:	from all lookup local
5210:	from all fwmark 0x80000/0xff0000 lookup main
5230:	from all fwmark 0x80000/0xff0000 lookup default
5250:	from all fwmark 0x80000/0xff0000 unreachable
5270:	from all lookup 52
32766:	from all lookup main

$ ip -6 route show table local
local ::1 dev lo proto kernel metric 0 pref medium
local aaaa:bbbb:cccc:dddd::1 dev eth0 proto kernel metric 0 pref medium
local fd7a:115c:a1e0::aaaa:bbbb dev tailscale0 proto kernel metric 0 pref medium
local fe80::5fdc:58a7:1a93:da5a dev tailscale0 proto kernel metric 0 pref medium
local fe80::9400:ff:fe0d:61a1 dev eth0 proto kernel metric 0 pref medium
multicast ff00::/8 dev eth0 proto kernel metric 256 pref medium
multicast ff00::/8 dev tailscale0 proto kernel metric 256 pref medium

$ ip -6 route show table 52
fd7a:115c:a1e0::53 dev tailscale0 metric 1024 pref medium
fd7a:115c:a1e0::/48 dev tailscale0 metric 1024 pref medium

$ ip -6 route show table main
aaaa:bbbb:cccc:dddd::/64 dev eth0 proto kernel metric 256 pref medium
fd7a:115c:a1e0::aaaa:bbbb dev tailscale0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::1 dev eth0 metric 1024 pref medium
```

From what I could see, firing up Tailscale added the following new entries to the routing tables:

```
$ ip route show table all > ts-off.txt
$ sudo tailscale up && ip route show table all > ts-on.txt && sudo tailscale down
$ diff ts-off.txt ts-on.txt
0a1,2
> 100.a.b.c dev tailscale0 table 52
> 100.100.100.100 dev tailscale0 table 52
3a6
> local 100.x.y.z dev tailscale0 table local proto kernel scope host src 100.x.y.z
12a16,17
> fd7a:115c:a1e0::53 dev tailscale0 table 52 metric 1024 pref medium
> fd7a:115c:a1e0::/48 dev tailscale0 table 52 metric 1024 pref medium
13a19
> fd7a:115c:a1e0::aaaa:bbbb dev tailscale0 proto kernel metric 256 pref medium
17a24
> local fd7a:115c:a1e0::aaaa:bbbb dev tailscale0 table local proto kernel metric 0 pref medium
```

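One thing worth noting about this snapshot-and-diff approach: `ip route show table all` only lists routes, not the policy rules that Tailscale also adds (the `5210`-`5270` entries from `ip -6 rule show` above), so those deserve the same treatment. A sketch of the same trick, with canned files standing in for real `ip -6 rule show` captures:

```shell
#!/bin/sh
# Snapshot the policy rules before and after `tailscale up`, then diff.
# The printf lines fake the two captures so the sketch is self-contained;
# on the real host they would be `ip -6 rule show > rules-off.txt` etc.
printf '0:\tfrom all lookup local\n32766:\tfrom all lookup main\n' > rules-off.txt
printf '0:\tfrom all lookup local\n5270:\tfrom all lookup 52\n32766:\tfrom all lookup main\n' > rules-on.txt

diff rules-off.txt rules-on.txt || true   # diff exits non-zero on differences
```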
I still haven't figured out what is actually going on there, and an attempted reproduction on a fresh server wasn't successful so far either. The problem is that packets are being sent with the wrong IPv6 source address, but that's just a symptom of the underlying cause.

Thankfully, my buddy Jub came up with a workaround: pin the default route to a fixed IPv6 source address, namely the correct one. That solved the issue (by fixing the symptom):

```
ip -6 route replace default via fe80::1 dev eth0 src aaaa:bbbb:cccc:dddd::1
```

To make this persist across reboots, I put it on a new `post-up` line in the network setup in `/etc/network/interfaces.d/50-cloud-init.cfg`:

```
auto eth0:0
iface eth0:0 inet6 static
    address aaaa:bbbb:cccc:dddd::/64
    gateway fe80::1
    post-up route add -net :: netmask 0 gw fe80::1%eth0 || true
    post-up ip -6 route replace default via fe80::1 dev eth0 src aaaa:bbbb:cccc:dddd::1 || true
    pre-down route del -net :: netmask 0 gw fe80::1%eth0 || true
```

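For what it's worth, on hosts managed by systemd-networkd instead of ifupdown, the same source pinning should be expressible declaratively via `PreferredSource=` in a `[Route]` section. An untested sketch (the filename and addresses are placeholders mirroring my setup, not something I've verified on Hetzner):

```
# /etc/systemd/network/10-eth0.network (hypothetical)
[Match]
Name=eth0

[Network]
Address=aaaa:bbbb:cccc:dddd::1/64

[Route]
Gateway=fe80::1
# Pin the source address used for the IPv6 default route:
PreferredSource=aaaa:bbbb:cccc:dddd::1
```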
A reboot confirmed that this works as a **workaround**.

But as I mentioned, I still can't make any sense of the underlying issue. I found [one open bug report in Tailscale's bug tracker](https://github.com/tailscale/tailscale/issues/17936) that sounded familiar, but it didn't fully match my situation. I also have to admit that my administration skills get a bit fuzzy when it comes to full-blown route analysis & debugging - so should you have any idea at all what is actually causing this behaviour, please get in touch [on Mastodon](https://chaos.social/@foosel) - I'd love to see this mystery solved, but I'm out of my depth here 😅
