Integrations
A two-way integrations layer — outbound adapters for webhooks, Slack, Discord, and GitHub auto-PR, plus an inbound HTTP server with signature verification, OAuth bot tokens, and multi-host parity.
Beacon (v4.0.0) introduced the two-way integrations layer; Flare (v5.0.0) closed the inbound loop and added OAuth, multi-host parity, per-IP rate limiting, Slack interactive components, and Discord guild auto-registration. The daemon can deliver notifications to external services through pluggable adapters, and receive signed requests from those same services through a small HTTP server. Both halves share the same secret store and the same per-provider configuration surface.
The page is split into two halves that mirror the data flow:
- Outbound — the daemon dispatches `notify.Bus` events to webhook, Slack, Discord, and GitHub auto-PR adapters.
- Inbound — the daemon binds an HTTP server, verifies request signatures, and routes slash commands through a transport-agnostic command router.
Outbound
The outbound half lives in `internal/daemon/relay`. A single `Dispatcher` subscribes to `notify.Bus` and fans events out to every registered `Adapter`. Each adapter owns its own template rendering, retry timing, and circuit breaker.
Architecture
The Adapter interface is the only contract every outbound integration implements. The Dispatcher owns:
- Subscription — one `notify.Bus` consumer per running daemon
- Per-adapter retry — exponential backoff `[500ms, 2s, 8s]`
- Circuit breaker — 3 failures inside a 5-minute rolling window opens the breaker; subsequent events skip the adapter until the window expires
- Concurrency — adapters dispatch in parallel; one slow provider never blocks the others
Secrets and OAuth bot tokens
Adapter credentials are stored in the OS keyring through `internal/config/keyring.go`, with a file-store fallback for hosts without a keyring backend. Secrets are write-only on the gRPC wire — the GUI and TUI can save and replace, but never read existing values back.
For Slack and Discord, Flare adds an OAuth install flow as the preferred path. Slack stores the workspace `xoxb-...` bot token returned from the OAuth callback and uses it for `chat.postMessage`, so slash responses can include rich attachments and DM the originator on private failures. Discord stores the bot token and authenticates inbound requests with `Authorization: Bot <token>`. The Integrations settings UI exposes "Connect Slack" and "Connect Discord" buttons that launch the flow in the user's default browser; on success, a Connected as <bot username> pill appears next to the integration. The legacy signing-secret + public-key path remains supported for users mid-cutover.
Webhook adapter
`WebhookAdapter` POSTs the canonical `notify.Notification` payload (plus delivery metadata — event id, attempt, timestamp) to a user-supplied URL. Each request carries an `X-Watchfire-Signature: sha256=<hex>` HMAC over the raw body so receivers can reject forged calls.
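A receiver can check this signature in a few lines of Go. `SignBody` and `VerifyBody` are illustrative names, not the daemon's exports, but the `sha256=<hex>` HMAC-over-raw-body shape matches the header described above:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// SignBody computes the X-Watchfire-Signature value for a raw body.
func SignBody(secret, body []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	return "sha256=" + hex.EncodeToString(mac.Sum(nil))
}

// VerifyBody is what a receiver would run: recompute the HMAC and
// compare in constant time so the check doesn't leak timing.
func VerifyBody(secret, body []byte, header string) bool {
	return hmac.Equal([]byte(SignBody(secret, body)), []byte(header))
}

func main() {
	sig := SignBody([]byte("s3cret"), []byte(`{"event":"TASK_FAILED"}`))
	fmt.Println(VerifyBody([]byte("s3cret"), []byte(`{"event":"TASK_FAILED"}`), sig)) // true
}
```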
Slack adapter
`internal/daemon/relay/slack.go` renders three Block Kit envelopes through `text/template`:
| Envelope | Trigger |
|---|---|
| `TASK_FAILED` | A task transitions to done with `success: false` |
| `RUN_COMPLETE` | A run-all / wildfire batch finishes |
| `WEEKLY_DIGEST` | The weekly digest fires |
Each envelope is a fixed sequence of header / section / context / actions blocks. Project color is mapped to a `:large_<color>_square:` shortcode in `slack_color.go` so the project dot survives Slack's theming.
The `TASK_FAILED` template carries three action buttons — Retry, Cancel, and View in Watchfire — that round-trip through the inbound `POST /echo/slack/interactivity` endpoint (see Slack interactivity). Retry reruns the named task, Cancel opens a modal that captures a reason into `task.failure_reason`, and View in Watchfire deep-links into the GUI.
Discord adapter
`internal/daemon/relay/discord.go` mirrors the same three envelopes as rich embeds, tinted by the project color. Two shared `text/template` helpers — `hexToInt` (for the embed color int) and `rfc3339` (for the timestamp footer) — are registered once and reused. A defensive 4000-rune description trim, with a single WARN logged on overflow, protects against pathological payloads (Discord rejects embeds longer than 4096).
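The rune-based trim can be sketched as follows. `trimDescription` is a hypothetical helper name; the caller is assumed to emit the single WARN when the second return value is true. Trimming on runes rather than bytes avoids splitting a multi-byte character at the boundary:

```go
package main

import "fmt"

// trimDescription caps a Discord embed description at maxRunes runes.
// Discord rejects embeds longer than 4096; trimming at 4000 leaves
// headroom. The bool return tells the caller to log its one WARN.
func trimDescription(s string, maxRunes int) (string, bool) {
	r := []rune(s)
	if len(r) <= maxRunes {
		return s, false
	}
	return string(r[:maxRunes]), true
}

func main() {
	out, trimmed := trimDescription("héllo", 3)
	fmt.Println(out, trimmed) // hél true
}
```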
GitHub auto-PR
GitHub auto-PR is opt-in per project, gated on a single key in `project.yaml`:

```yaml
github:
  auto_pr:
    enabled: true
```
Prerequisites: `gh` on PATH and `gh auth status` returning 0. With both satisfied, the end-of-task lifecycle in `internal/daemon/git/pr.go::OpenPR` runs:

- `gh auth status` — the gate; when it fails, silent fallback is logged once per project lifetime
- Parse `<owner>/<repo>` from the project's git remote
- `git push --force-with-lease` to publish the `watchfire/<n>` branch
- Render the PR body via `pr_body.md.tmpl`
- `gh api -X POST /repos/:owner/:repo/pulls` to open the PR
Sentinel errors distinguish silent fallback (one WARN per project lifetime when prerequisites are missing) from per-attempt failures (logged at WARN with retry on next task).
Reliability primitives at a glance
| Primitive | Where | Behaviour |
|---|---|---|
| Retry | Per adapter | [500ms, 2s, 8s] backoff |
| Circuit breaker | Per adapter | 3 failures / 5-minute window opens the breaker |
| Secrets | `internal/config/keyring.go` | OS keyring, file-store fallback |
Inbound
The inbound half lives in `internal/daemon/echo`. It binds a small HTTP server, verifies request signatures with constant-time HMAC or Ed25519, dedupes replays through an in-process LRU cache, applies a per-IP rate limit, and dispatches slash commands through a transport-agnostic router. Concrete handlers exist for GitHub (`pull_request.closed` PR-merge), GitLab (Merge Request Hook), Bitbucket (`pullrequest:fulfilled`), Slack slash commands and interactivity, and Discord interactions.
HTTP server framework
`internal/daemon/echo/server.go` wraps an `http.Server` with the small set of guarantees every provider handler relies on:
- Bind address — `ListenAddr` from the inbound config; default `127.0.0.1:8765`
- Graceful shutdown — 5-second drain on stop so in-flight requests finish
- Body cap — 1 MiB per request via a global middleware; oversized bodies return `413`
- Panic recovery — panicking handlers return `500` instead of crashing the daemon
- Health endpoint — `/echo/health` is unauthenticated and returns `200 OK` for liveness probes
- Bind failure — logged at ERROR; the daemon keeps running so a misconfigured port never wedges the rest of the system
Provider handlers register themselves through a single plug-in API:
```go
RegisterProvider(method, path, handler)
```
Concrete handlers return `503` until their per-provider secret has been configured. An empty `InboundConfig` (no secrets, no `ListenAddr`) means no listener — the daemon never opens a port until at least one provider is wired.
Per-IP rate limiting
Flare adds a per-IP token bucket via `golang.org/x/time/rate` in front of every `/echo/*` route. The default budget is 30 requests per minute per IP, configurable through `models.InboundConfig.RateLimitPerMin` (`0` disables the limiter entirely). Idempotent re-deliveries that hit the LRU cache do not count against the bucket, so retried Slack / Discord deliveries during a network blip aren't penalised. When a request is rate-limited the server returns `429`, and the daemon logs a single WARN per IP per minute to avoid log floods under sustained traffic.
Signature verification
`internal/daemon/echo/verify.go` exposes three constant-time verifiers, one per upstream:
| Function | Algorithm | Signed payload |
|---|---|---|
| `VerifyGitHub` | HMAC-SHA256 against `sha256=<hex>` header | Raw request body |
| `VerifySlack` | HMAC-SHA256 with 5-minute drift window | `v0:<timestamp>:<body>` |
| `VerifyDiscord` | Ed25519 with 5-minute drift window | `timestamp \|\| body` |
All three use constant-time comparisons so signature checks don't leak timing. Both Slack and Discord enforce a 5-minute timestamp drift to bound replay windows.
Idempotency cache
`internal/daemon/echo/idempotency.go` is a small LRU+TTL cache that drops duplicate deliveries:
- Capacity — 1000 entries
- TTL — 24 hours
- Storage — `container/list` for LRU ordering, `sync.Mutex` for concurrent access
- API — `Seen(key)` returns whether the key was already in the cache, and refreshes its TTL on every hit so chatty deliveries don't churn
The cache is process-local — a daemon restart drops state, which is acceptable for a 24-hour replay window.
Command router
Slash commands are dispatched through a single transport-agnostic function:
```go
Route(ctx, cmd, subcmd, rest, CommandContext) CommandResponse
```
Three commands are wired today:
| Command | Effect |
|---|---|
| `status` | Returns the current per-project status block |
| `retry <task>` | Re-runs the named task |
| `cancel <task>` | Cancels the named task |
The response shape is intentionally transport-neutral:
```go
CommandResponse{
    text       string
    blocks     []Block
    ephemeral  bool
    in_channel bool
}
```
Each transport handler renders the same CommandResponse into its native envelope (Discord components, Slack Block Kit, etc.). New commands plug into commands.Route once and surface everywhere a transport is wired.
Discord interactions endpoint
`internal/daemon/echo/handler_discord.go` exposes `POST /echo/discord/interactions` with the full inbound pipeline:
- Ed25519 verification through `VerifyDiscord`
- Replay window — reject requests outside the 5-minute drift window
- Idempotency — drop duplicates already in the LRU+TTL cache
- Dispatch — `PING` returns `PONG`; `APPLICATION_COMMAND` calls `commands.Route`
- Render — `discord_render.go::RenderInteraction` turns the `CommandResponse` into a Discord interaction response
Slash-command registration is automatic on Flare: the daemon enumerates every guild the bot is in at startup and POSTs the three slash-command schemas through `internal/cli/integrations_discord.go::registerForGuild`, then subscribes to the `GUILD_CREATE` Gateway event so a freshly added guild gets commands within ~30 seconds. The Settings UI lists every guild with a ✓ / ✗ registration pill. Discord's commands API is upsert-style, so re-running is safe; the manual CLI fallback `watchfire integrations register-discord <guild_id>` stays available.
Slack slash-command HTTP transport
`internal/daemon/echo/handler_slack_commands.go` translates the URL-encoded slash-command form body (`command`, `text`, `team_id`, `channel_id`, `user_id`, `trigger_id`) into a call against the shared `commands.Route(...)` router and renders the `CommandResponse` as Slack response JSON (`{response_type: "in_channel" | "ephemeral", text, blocks}`). With Flare, `/watchfire status` / `retry` / `cancel` works in Slack at parity with Discord. Slack v0 HMAC verification, the 5-minute drift window, and the LRU+TTL idempotency cache are all shared with the rest of the inbound pipeline.
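Rendering the neutral response into Slack's slash-response JSON is a small step. The struct below is an illustrative subset (no `blocks`); the `response_type` values follow Slack's documented scheme:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// slackJSON is the slash-command response shape Slack expects.
type slackJSON struct {
	ResponseType string `json:"response_type"`
	Text         string `json:"text,omitempty"`
}

// renderSlack maps the transport-neutral in_channel flag onto Slack's
// "in_channel" / "ephemeral" response types.
func renderSlack(text string, inChannel bool) ([]byte, error) {
	rt := "ephemeral"
	if inChannel {
		rt = "in_channel"
	}
	return json.Marshal(slackJSON{ResponseType: rt, Text: text})
}

func main() {
	out, _ := renderSlack("status: all green", true)
	fmt.Println(string(out)) // {"response_type":"in_channel","text":"status: all green"}
}
```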
Slack interactivity
`POST /echo/slack/interactivity` handles the `block_actions` and `view_submission` payloads emitted when users click the Retry, Cancel, or View in Watchfire buttons on the outbound `TASK_FAILED` envelope. Verification, drift window, and idempotency match the slash-commands endpoint. Button presses route through `commands.Route`, so a Retry click is the exact equivalent of `/watchfire retry`. Cancel opens a Slack modal that asks "Why are you cancelling?"; the supplied reason lands in `task.failure_reason`.
GitHub PR-merge handler
`internal/daemon/echo/handler_github.go`, registered at `POST /echo/github?project=<id>`, parses `X-GitHub-Event` / `X-Hub-Signature-256` / `X-GitHub-Delivery`, resolves the per-project HMAC secret from the keyring, runs `verify.VerifyGitHub`, and deduplicates against the idempotency cache. It then narrows on `event == "pull_request" && action == "closed" && pull_request.merged == true`, matches the Watchfire task by `pull_request.head.ref == watchfire/<n>`, and calls `task.MarkDoneIfNotAlready`. A Pulse `RUN_COMPLETE` notification fires, titled `<project> — PR #<number> merged`. With Flare, the GitHub auto-PR loop closes itself.
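The narrowing logic can be sketched with a struct that models only the fields the handler reads. `matchMergedWatchfirePR` is a hypothetical helper, not the daemon's function; GitHub's real payload is much larger than this subset:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// prEvent models just the fields the narrowing needs.
type prEvent struct {
	Action      string `json:"action"`
	PullRequest struct {
		Merged bool `json:"merged"`
		Number int  `json:"number"`
		Head   struct {
			Ref string `json:"ref"`
		} `json:"head"`
	} `json:"pull_request"`
}

// matchMergedWatchfirePR applies the narrowing described above: only a
// closed-and-merged PR on a watchfire/<n> branch completes a task.
func matchMergedWatchfirePR(event string, body []byte) (branch string, ok bool) {
	if event != "pull_request" {
		return "", false
	}
	var e prEvent
	if err := json.Unmarshal(body, &e); err != nil {
		return "", false
	}
	if e.Action != "closed" || !e.PullRequest.Merged {
		return "", false
	}
	if !strings.HasPrefix(e.PullRequest.Head.Ref, "watchfire/") {
		return "", false
	}
	return e.PullRequest.Head.Ref, true
}

func main() {
	body := []byte(`{"action":"closed","pull_request":{"merged":true,"number":7,"head":{"ref":"watchfire/3"}}}`)
	branch, ok := matchMergedWatchfirePR("pull_request", body)
	fmt.Println(branch, ok) // watchfire/3 true
}
```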
GitHub Enterprise / GitLab / Bitbucket parity
Per-project `github_host` on `models.InboundConfig` lets the existing GitHub HMAC-SHA256 verifier target arbitrary GitHub Enterprise hostnames (the same field is used by the outbound auto-PR path). A new `internal/daemon/echo/handler_gitlab.go` verifies `X-Gitlab-Token` and narrows on Merge Request Hook events with `action: merge`. A new `internal/daemon/echo/handler_bitbucket.go` verifies `X-Hub-Signature` (HMAC-SHA256) and narrows on `pullrequest:fulfilled` events. The Settings UI surfaces a "Git host" picker on inbound config.
Inbound configuration
The inbound surface is driven by a single `InboundConfig` with global fields and one block per provider:
| Field | Purpose |
|---|---|
| `ListenAddr` | Bind address; default `127.0.0.1:8765` |
| `PublicURL` | Used to construct the per-provider URLs the GUI offers as Copy buttons |
| `RateLimitPerMin` | Per-IP rate limit across `/echo/*`; default 30, `0` disables |
| `github_host` | Per-project GitHub host (default `github.com`); also used by outbound auto-PR |
| Per-provider secrets / OAuth tokens | One write-only credential per upstream; empty disables that handler |
An empty `InboundConfig` is the signal to skip the listener entirely — the daemon never binds a port until at least one provider is configured.
Migration
- Empty `InboundConfig` = no listener. The daemon does not bind a port until you configure at least one inbound provider.
- Per-provider 503 — concrete handlers return `503 Service Unavailable` until their secret is set. The `/echo/health` endpoint is always available regardless of provider state.
- OAuth is opt-in — existing signing-secret + public-key configs continue to work after upgrading to Flare. Use the new "Connect Slack" / "Connect Discord" buttons to migrate.
- Rate limiting — `RateLimitPerMin` defaults to 30. Set it to `0` to disable, or raise it for high-volume inbound traffic.
- Multi-host inbound — leave `github_host` empty for github.com; set it per-project for GitHub Enterprise. GitLab and Bitbucket handlers are inactive until their per-project secret is configured.
For the surfaces that drive integrations from the UI side, see the Integrations panel in the GUI Settings doc and the Integrations overlay in the CLI / TUI doc. For CLI commands, see `watchfire integrations`.
See also
- Security — how inbound signature verification, outbound signing, and secret storage fit into the broader threat model.