DaemonHandle's shutdown_tx (oneshot) is replaced with a CancellationToken
shared between the daemon loop and the IPC server. The token is the
single source of truth for "should we shut down" — `DaemonHandle::shutdown`
cancels it, and an IPC `Stop` request also cancels it.
- daemon/state: store the CancellationToken on DaemonHandle and clone it
on Clone (so cached IPC handles can still trigger shutdown).
- daemon/ipc: IpcServer takes a daemon_shutdown token; `Stop` now cancels
it instead of returning Ok and doing nothing. Add IpcClient with
`request`, `status`, and `stop` methods so the CLI can drive a
backgrounded daemon over the Unix socket.
- daemon/lifecycle: thread the token through run_daemon_loop and
run_session, pass a clone to IpcServer::new.
- lib.rs: re-export IpcClient/IpcCommand/IpcResponse so callers don't
have to reach into the daemon module.
- src/vpn_cmds.rs: `sunbeam disconnect` now actually talks to the daemon
  via IpcClient::stop, and `sunbeam vpn status` queries IpcClient::status
  and prints the node's addresses, peer count, and DERP home.
The daemon orchestrates everything: it owns reconnection backoff, the
WireGuard tunnel, the smoltcp engine, the DERP relay loop, the local
TCP proxy, and a Unix-socket IPC server for status queries.
- daemon/state: DaemonStatus state machine + DaemonHandle for shutdown
signaling and live status access
- daemon/ipc: newline-delimited JSON Unix socket server (Status,
Disconnect, Peers requests)
- daemon/lifecycle: VpnDaemon::start spawns run_daemon_loop, which pins
a session future and selects against shutdown_rx so shutdown breaks
out cleanly. run_session brings up the full pipeline:
control client → register → map stream → wg tunnel → engine →
proxy listener → wg encap/decap loop → DERP relay → IPC server.
DERP transport: when the netmap doesn't surface a usable DERP endpoint
(Headscale's embedded relay returns host_name="headscale", port=0),
fall back to deriving host:port from coordination_url. WG packets to
SendDerp peers go via a dedicated derp_out channel; inbound DERP frames
flow back through derp_in into the decap arm, which forwards Packet
results to the engine and Response results back to derp_out for the
handshake exchange.