`sunbeam connect` now fork-execs itself with a hidden `__vpn-daemon`
subcommand instead of running the daemon in-process. The user-facing
command spawns the child detached (stdio → log file, setsid for no
controlling TTY), polls the IPC socket until the daemon reaches
Running, prints a one-line status, and exits. The user gets back to
their shell immediately.
- src/cli.rs: `Connect { foreground }` instead of a unit variant. Add a
  hidden `__vpn-daemon` Verb that the spawned child runs.
- src/vpn_cmds.rs: split into spawn_background_daemon (default path)
and run_daemon_foreground (used by both `connect --foreground` and
`__vpn-daemon`). Detached child uses pre_exec(setsid) and inherits
--context from the parent so it resolves the same VPN config.
Refuses to start if a daemon is already running on the control
socket; cleans up stale socket files. Switches the proxy bind from
16443 (sienna's existing SSH tunnel uses it) to 16579.
- sunbeam-net/src/daemon/lifecycle: add a SocketGuard RAII type so the
IPC control socket is unlinked when the daemon exits, regardless of
shutdown path. Otherwise `vpn status` after a clean disconnect would
see a stale socket and report an error.
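The spawn path can be sketched std-only (names like `spawn_background_daemon`
follow the text above but are assumptions; the real code detaches via
pre_exec(setsid), for which std's `process_group(0)` is only a rough
stand-in — new process group, not a new session):

```rust
use std::fs::OpenOptions;
use std::os::unix::process::CommandExt; // for process_group()
use std::path::Path;
use std::process::{Command, Stdio};

// Sketch: re-exec the given binary as a detached child with stdio
// redirected to a log file.
fn spawn_background_daemon(exe: &Path, log_path: &str) -> std::io::Result<u32> {
    let log = OpenOptions::new().create(true).append(true).open(log_path)?;
    let child = Command::new(exe)
        .arg("__vpn-daemon")
        .stdin(Stdio::null())
        .stdout(Stdio::from(log.try_clone()?)) // stdio → log file
        .stderr(Stdio::from(log))
        .process_group(0) // stand-in for setsid(): detach from parent's group
        .spawn()?;
    Ok(child.id())
}

fn main() -> std::io::Result<()> {
    if std::env::args().any(|a| a == "__vpn-daemon") {
        // Child path: the real daemon loop would run here.
        return Ok(());
    }
    let exe = std::env::current_exe()?;
    let pid = spawn_background_daemon(&exe, "/tmp/sunbeam-daemon.log")?;
    println!("daemon spawned (pid {pid})");
    Ok(())
}
```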
End-to-end smoke test against the docker stack:
$ sunbeam connect
==> VPN daemon spawned (pid 90072, ...)
Connected (100.64.0.154, fd7a:115c:a1e0::9a) — 2 peers visible
$ sunbeam vpn status
VPN: running
addresses: 100.64.0.154, fd7a:115c:a1e0::9a
peers: 2
derp home: region 0
$ sunbeam disconnect
==> Asking VPN daemon to stop...
Daemon acknowledged shutdown.
$ sunbeam vpn status
VPN: not running
DaemonHandle's shutdown_tx (oneshot) is replaced with a CancellationToken
shared between the daemon loop and the IPC server. The token is the
single source of truth for "should we shut down" — `DaemonHandle::shutdown`
cancels it, and an IPC `Stop` request also cancels it.
- daemon/state: store the CancellationToken on DaemonHandle and clone it
on Clone (so cached IPC handles can still trigger shutdown).
- daemon/ipc: IpcServer takes a daemon_shutdown token; `Stop` now cancels
it instead of returning Ok and doing nothing. Add IpcClient with
`request`, `status`, and `stop` methods so the CLI can drive a
backgrounded daemon over the Unix socket.
- daemon/lifecycle: thread the token through run_daemon_loop and
run_session, pass a clone to IpcServer::new.
- lib.rs: re-export IpcClient/IpcCommand/IpcResponse so callers don't
have to reach into the daemon module.
- src/vpn_cmds.rs: `sunbeam disconnect` now actually talks to the daemon
via IpcClient::stop, and `sunbeam vpn status` queries IpcClient::status
and prints addresses + peer count + DERP home.
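For the CLI side, a std-only sketch of one request/response over the control
socket (the JSON strings and helper names here are assumptions; the real code
uses serde-derived IpcCommand/IpcResponse):

```rust
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::{UnixListener, UnixStream};

// Daemon side: answer exactly one newline-delimited JSON request.
fn serve_once(listener: &UnixListener) -> std::io::Result<()> {
    let (stream, _) = listener.accept()?;
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut line = String::new();
    reader.read_line(&mut line)?; // one request per line
    let reply = match line.trim() {
        r#"{"cmd":"Status"}"# => r#"{"status":"running","peers":2}"#,
        r#"{"cmd":"Stop"}"# => r#"{"status":"stopping"}"#,
        _ => r#"{"error":"unknown command"}"#,
    };
    let mut stream = reader.into_inner();
    writeln!(stream, "{reply}")
}

// CLI side: send one command, read one reply line.
fn request(path: &str, cmd: &str) -> std::io::Result<String> {
    let mut stream = UnixStream::connect(path)?;
    writeln!(stream, "{cmd}")?;
    let mut line = String::new();
    BufReader::new(stream).read_line(&mut line)?;
    Ok(line.trim().to_string())
}

fn main() -> std::io::Result<()> {
    let path = "/tmp/sunbeam-ipc-demo.sock";
    let _ = std::fs::remove_file(path); // clean up a stale socket first
    let listener = UnixListener::bind(path)?;
    let server = std::thread::spawn(move || serve_once(&listener));
    println!("{}", request(path, r#"{"cmd":"Status"}"#)?);
    server.join().unwrap()?;
    std::fs::remove_file(path)?; // SocketGuard equivalent: unlink on exit
    Ok(())
}
```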
- docker-compose.yml: run peer-a and peer-b with TS_USERSPACE=false +
/dev/net/tun device + cap_add. Pin peer-a's WG listen port to 41641
via TS_TAILSCALED_EXTRA_ARGS and publish it to the host so direct
UDP from outside docker has somewhere to land.
- run.sh: use an ephemeral pre-auth key for the test client so
Headscale auto-deletes the test node when its map stream drops
(instead of accumulating hundreds of stale entries that eventually
slow netmap propagation to a crawl). Disable shields-up on both
peers so the kernel firewall doesn't drop inbound tailnet TCP. Tweak
the JSON key extraction to handle pretty-printed output.
- integration.rs: add `test_e2e_tcp_through_tunnel` that brings up
the daemon, dials peer-a's echo server through the proxy, and
asserts the echo body comes back. Currently `#[ignore]`d — the
docker stack runs Headscale over plain HTTP, but Tailscale's client
unconditionally tries TLS to DERP relays ("tls: first record does
not look like a TLS handshake"), so peer-a can never receive
packets we forward via the relay. Unblocking needs either TLS
termination on the docker DERP or running the test inside the same
docker network as peer-a. Test stays in the tree because everything
it tests up to the read timeout is real verified behavior.
A pile of correctness bugs, all of which kept real Tailscale peers from
sending WireGuard packets back to us. Found while building out the e2e
test against the docker-compose stack.
1. WireGuard static key was wrong (lifecycle.rs)
We were initializing the WgTunnel with `keys.wg_private`, a separate
x25519 key from the one Tailscale advertises in netmaps. Peers know
us by `node_public` and compute mac1 against it; signing handshakes
with a different private key meant every init we sent was silently
dropped. Use `keys.node_private` instead — node_key IS the WG static
key in Tailscale.
2. DERP relay couldn't route packets to us (derp/client.rs)
Our DerpClient was sealing the ClientInfo frame with a fresh
ephemeral NaCl keypair and putting the ephemeral public in the frame
prefix. Tailscale's protocol expects the *long-term* node public key
in the prefix — that's how the relay knows where to forward packets
addressed to our node_key. With the ephemeral key, the relay
accepted the connection but never delivered our peers' responses.
Now seal with the long-term node key.
3. Headscale never persisted our DiscoKey (proto/types.rs, control/*)
The streaming /machine/map handler in Headscale ≥ capVer 68 doesn't
update DiscoKey on the node record — only the "Lite endpoint update"
path does, gated on Stream:false + OmitPeers:true + ReadOnly:false.
Without DiscoKey our nodes appeared in `headscale nodes list` with
`discokey:000…` and never propagated into peer netmaps. Add the
DiscoKey field to RegisterRequest, add OmitPeers/ReadOnly fields to
MapRequest, and call a new `lite_update` between register and the
streaming map. Also add `post_json_no_response` for endpoints that
reply with an empty body.
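As a sketch, the lite update body looks roughly like this (key strings and
the endpoint are placeholders; only the Stream/OmitPeers/ReadOnly gating and
the DiscoKey field come from the Headscale behavior described above):

```json
{
  "Version": 68,
  "NodeKey": "nodekey:<hex>",
  "DiscoKey": "discokey:<hex>",
  "Endpoints": ["203.0.113.7:41641"],
  "Stream": false,
  "OmitPeers": true,
  "ReadOnly": false
}
```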
4. EncapAction is now a struct instead of an enum (wg/tunnel.rs)
Routing was forced to either UDP or DERP. With a peer whose
advertised UDP endpoint is on an unreachable RFC1918 network (e.g.
docker bridge IPs), we'd send via UDP, get nothing, and never fall
back. Send over every available transport — receivers dedupe via
the WireGuard replay window — and let dispatch_encap forward each
populated arm to its respective channel.
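The shape of that change, as a sketch (field and function names are
assumptions):

```rust
use std::net::SocketAddr;
use std::sync::mpsc;

// Struct-shaped EncapAction: any subset of transports may be populated,
// so a handshake can go out over UDP *and* DERP at once. Receivers
// dedupe via the WireGuard replay window.
#[derive(Default)]
struct EncapAction {
    send_udp: Option<(SocketAddr, Vec<u8>)>,
    send_derp: Option<(u16, Vec<u8>)>, // (DERP region, datagram)
}

// Forward each populated arm to its respective channel.
fn dispatch_encap(
    action: EncapAction,
    udp_out: &mpsc::Sender<(SocketAddr, Vec<u8>)>,
    derp_out: &mpsc::Sender<(u16, Vec<u8>)>,
) {
    if let Some(msg) = action.send_udp {
        let _ = udp_out.send(msg); // UDP failure is non-fatal
    }
    if let Some(msg) = action.send_derp {
        let _ = derp_out.send(msg);
    }
}

fn main() {
    let (udp_tx, udp_rx) = mpsc::channel();
    let (derp_tx, derp_rx) = mpsc::channel();
    let action = EncapAction {
        send_udp: Some(("192.0.2.10:41641".parse().unwrap(), vec![1])),
        send_derp: Some((999, vec![1])),
    };
    dispatch_encap(action, &udp_tx, &derp_tx);
    assert!(udp_rx.try_recv().is_ok() && derp_rx.try_recv().is_ok());
    println!("both arms forwarded");
}
```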
5. Drop the dead PacketRouter (wg/router.rs)
Skeleton from an earlier design that never got wired up; it's been
accumulating dead-code warnings.
DERP works for everything but adds relay latency. Add a parallel UDP
transport so peers with reachable endpoints can talk directly:
- wg/tunnel: track each peer's local boringtun index in PeerTunnel and
expose find_peer_by_local_index / find_peer_by_endpoint lookups
- daemon/lifecycle: bind a UdpSocket on 0.0.0.0:0 alongside DERP, run
the recv loop on a clone of an Arc<UdpSocket> so send and recv can
proceed concurrently
- run_wg_loop: new udp_in_rx select arm. For inbound UDP we identify
the source peer by parsing the WireGuard receiver_index out of the
packet header (msg types 2/3/4) and falling back to source-address
matching for type-1 handshake initiations
- dispatch_encap: SendUdp now actually forwards via the UDP channel
UDP failure is non-fatal — DERP can carry traffic alone if the bind
fails or packets are dropped.
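The receiver-index extraction can be sketched as pure byte parsing (the
function name is an assumption; offsets follow the WireGuard message
layouts — type, three reserved bytes, then little-endian indices):

```rust
// Return the receiver's session index for WireGuard message types
// 2 (handshake response), 3 (cookie reply), and 4 (transport data).
// Type 1 initiations carry no receiver index, so those callers fall
// back to source-address matching.
fn wg_receiver_index(packet: &[u8]) -> Option<u32> {
    let offset = match *packet.first()? {
        2 => 8,           // type(1) + reserved(3) + sender_index(4)
        3 | 4 => 4,       // type(1) + reserved(3)
        _ => return None, // type 1 (or garbage): no receiver index
    };
    let bytes: [u8; 4] = packet.get(offset..offset + 4)?.try_into().ok()?;
    Some(u32::from_le_bytes(bytes))
}

fn main() {
    let mut pkt = vec![4u8, 0, 0, 0]; // type 4: transport data
    pkt.extend_from_slice(&7u32.to_le_bytes());
    assert_eq!(wg_receiver_index(&pkt), Some(7));
    println!("receiver index parsed");
}
```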
Spins up Headscale 0.23 (with embedded DERP) plus two Tailscale peers
in docker compose, generates pre-auth keys, and runs three integration
tests behind the `integration` feature:
- test_register_and_receive_netmap: full TS2021 → register → first
netmap fetch
- test_proxy_listener_accepts: starts the daemon and waits for it to
reach the Running state
- test_daemon_lifecycle: full lifecycle including DERP connect, then
clean shutdown via the DaemonHandle
Run with `sunbeam-net/tests/run.sh` (handles compose up/down + auth
key provisioning) or manually via cargo nextest with the env vars
SUNBEAM_NET_TEST_AUTH_KEY and SUNBEAM_NET_TEST_COORD_URL set.
The daemon orchestrates everything: it owns reconnection backoff, the
WireGuard tunnel, the smoltcp engine, the DERP relay loop, the local
TCP proxy, and a Unix-socket IPC server for status queries.
- daemon/state: DaemonStatus state machine + DaemonHandle for shutdown
signaling and live status access
- daemon/ipc: newline-delimited JSON Unix socket server (Status,
Disconnect, Peers requests)
- daemon/lifecycle: VpnDaemon::start spawns run_daemon_loop, which pins
a session future and selects against shutdown_rx so shutdown breaks
out cleanly. run_session brings up the full pipeline:
control client → register → map stream → wg tunnel → engine →
proxy listener → wg encap/decap loop → DERP relay → IPC server.
DERP transport: when the netmap doesn't surface a usable DERP endpoint
(Headscale's embedded relay returns host_name="headscale", port=0),
fall back to deriving host:port from coordination_url. WG packets to
SendDerp peers go via a dedicated derp_out channel; inbound DERP frames
flow back through derp_in into the decap arm, which forwards Packet
results to the engine and Response results back to derp_out for the
handshake exchange.
- proxy/engine: NetworkEngine that owns the smoltcp VirtualNetwork and
bridges async TCP streams to virtual sockets via a 5ms poll loop.
Each ProxyConnection holds the local TcpStream + smoltcp socket
handle and shuttles data between them with try_read/try_write so the
engine never blocks.
- proxy/tcp: skeleton TcpProxy listener (currently unused; the daemon
inlines its own listener that hands off to the engine via mpsc)
- control/client: TS2021 connection setup — TCP, HTTP CONNECT-style
upgrade to /ts2021, full Noise IK handshake via NoiseStream, then
HTTP/2 client handshake on top via the h2 crate
- control/register: POST /machine/register with pre-auth key, PascalCase
JSON serde matching Tailscale's wire format
- control/netmap: streaming MapStream that reads length-prefixed JSON
messages from POST /machine/map, classifies them into Full/Delta/
PeersChanged/PeersRemoved/KeepAlive, and transparently zstd-decodes
by detecting the 0x28 0xB5 0x2F 0xFD magic (Headscale only compresses
if the client opts in)
- wg/tunnel: per-peer boringtun Tunn management with peer table sync
from netmap (add/remove/update endpoints, allowed_ips, DERP region)
and encapsulate/decapsulate/tick that route to UDP or DERP
- wg/socket: smoltcp Interface backed by an mpsc-channel Device that
bridges sync poll-based smoltcp with async tokio mpsc channels
- wg/router: skeleton PacketRouter (currently unused; reserved for the
unified UDP/DERP ingress path)
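The map-stream framing and compression sniff can be sketched std-only
(helper names are assumptions, and this sketch assumes a 4-byte
little-endian length prefix; the 0x28 0xB5 0x2F 0xFD magic is the
standard zstd frame header):

```rust
const ZSTD_MAGIC: [u8; 4] = [0x28, 0xB5, 0x2F, 0xFD];

// A compressed message starts with the zstd frame magic; plain JSON
// starts with '{'. No out-of-band flag is needed.
fn is_zstd_frame(msg: &[u8]) -> bool {
    msg.starts_with(&ZSTD_MAGIC)
}

// Split one length-prefixed message off the front of the buffer,
// returning (message body, remaining bytes), or None if the buffer
// doesn't yet hold a complete message.
fn next_message(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    let len = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    let body = buf.get(4..4 + len)?;
    Some((body, &buf[4 + len..]))
}

fn main() {
    let mut buf = 2u32.to_le_bytes().to_vec();
    buf.extend_from_slice(b"{}");
    let (body, _rest) = next_message(&buf).unwrap();
    println!("zstd: {}, body: {}", is_zstd_frame(body), String::from_utf8_lossy(body));
}
```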
DERP is Tailscale's TCP relay protocol for peers that can't establish a
direct UDP path. Add the standalone client:
- derp/framing: 5-byte frame codec (1-byte type + 4-byte BE length)
- derp/client: HTTP /derp upgrade, Tailscale's NaCl SealedBox handshake
(ServerKey → ClientInfo → ServerInfo → NotePreferred), and
send_packet/recv_packet for forwarding WireGuard datagrams
Includes the 8-byte DERP\xf0\x9f\x94\x91 magic prefix in the ServerKey
payload and reads the HTTP upgrade response one byte at a time so the
inline first frame isn't swallowed by a buffered reader.
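The 5-byte codec is small enough to sketch in full (function names and
the demo frame type are assumptions; the layout — 1-byte type, 4-byte
big-endian length — is as described above):

```rust
// Encode one DERP frame: type byte, BE length, then payload.
fn encode_frame(frame_type: u8, payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(5 + payload.len());
    out.push(frame_type);
    out.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    out.extend_from_slice(payload);
    out
}

// Decode one frame off the front of the buffer, returning
// (frame type, payload, remaining bytes), or None if the buffer
// doesn't yet hold a complete frame.
fn decode_frame(buf: &[u8]) -> Option<(u8, &[u8], &[u8])> {
    let frame_type = *buf.first()?;
    let len = u32::from_be_bytes(buf.get(1..5)?.try_into().ok()?) as usize;
    let payload = buf.get(5..5 + len)?;
    Some((frame_type, payload, &buf[5 + len..]))
}

fn main() {
    let wire = encode_frame(0x05, b"hello"); // hypothetical frame type
    let (t, payload, _rest) = decode_frame(&wire).unwrap();
    assert_eq!((t, payload), (0x05, &b"hello"[..]));
    println!("round-trip ok");
}
```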
Tailscale's TS2021 protocol layers HTTP/2 over an encrypted Noise IK
channel reached via HTTP CONNECT-style upgrade. Add the lower half:
- noise/handshake: hand-rolled Noise_IK_25519_ChaChaPoly_BLAKE2s
initiator with HKDF + ChaCha20-Poly1305 (no snow dependency)
- noise/framing: 3-byte frame codec (1-byte type + 2-byte BE length)
- noise/stream: NoiseStream implementing AsyncRead + AsyncWrite over
the framed channel so the h2 crate can sit on top
Add the workspace crate that will host a pure Rust Headscale/Tailscale-
compatible VPN client. This first commit lands the crate skeleton plus
the leaf modules that the rest of the stack builds on:
- error: thiserror Error enum + Result alias
- config: VpnConfig
- keys: Curve25519 node/disco/wg key types with on-disk persistence
- proto/types: PascalCase serde wire types matching Tailscale's JSON