March 8, 2026 · 10 min read

How I Built a Full Status Alert System

Website uptime, Supabase health, API monitoring, Garmin failure alerts, Fail2ban reports, personal changelog, and a weekly summary email — all automated from a headless home server

Yield: A full observability stack for a personal server — every meaningful system sends an alert or weekly report automatically, with zero manual checking
Difficulty: Intermediate (bash scripting, cron scheduling, Resend API, Claude CLI integration)
Total Cook Time: ~4 hours spread across several sessions — each alert takes 20–45 minutes individually

Ingredients

The Problem: A Server You Can’t See

A headless server is quiet by design. That’s the point — it runs in the background, lid closed, in another room. But quiet also means invisible. If joseandgoose.com goes down at 2am, I won’t know until someone tells me. If Supabase has an outage and my contact form stops saving submissions, I’ll find out when I check the database manually (which I never do). If the Garmin recap cron job silently fails, I get no email and no clue.

The solution isn’t to check things manually — that defeats the purpose of automation. The solution is to make the system tell you when something is wrong. Every important job should either succeed quietly or fail loudly. Here’s the full stack:

Alert 1: Website Uptime Monitor

Every 5 minutes

The most basic question: is joseandgoose.com responding? A curl request every 5 minutes, checked against an expected HTTP status code. If it returns anything other than 200, send an alert.

🔧 Developer section: Uptime monitor script
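A minimal sketch of what that script can look like, assuming Resend for delivery and two environment variables (`RESEND_API_KEY`, `ALERT_EMAIL`) that are my own invention here, not necessarily the original setup:

```shell
#!/usr/bin/env bash
# Uptime monitor sketch. The sender address and the env vars
# RESEND_API_KEY / ALERT_EMAIL are assumptions, not the author's exact setup.
SITE_URL="${SITE_URL:-https://joseandgoose.com}"

# Fetch only the HTTP status code; --max-time keeps a hung server from
# hanging the cron job itself. curl prints 000 if it cannot connect at all.
check_status() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1"
}

# Pure helper: is this the status code we expect?
is_healthy() {
  [ "$1" = "200" ]
}

# One POST to Resend's transactional email API.
send_alert() {
  curl -s -X POST https://api.resend.com/emails \
    -H "Authorization: Bearer $RESEND_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"from\":\"alerts@joseandgoose.com\",\"to\":\"$ALERT_EMAIL\",\"subject\":\"$1\",\"text\":\"$2\"}" \
    > /dev/null
}

# Cron invokes this with --run, so the file can be sourced safely elsewhere.
if [ "${1:-}" = "--run" ]; then
  status=$(check_status "$SITE_URL")
  if ! is_healthy "$status"; then
    send_alert "DOWN: $SITE_URL returned $status" \
      "Uptime check failed at $(date). Expected 200, got $status."
  fi
fi
```

Cron runs it every 5 minutes; silence means the check passed.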

In the first month of running: two downtime events. One was a Vercel deployment that briefly returned a 503 during a cold start. One was my own fault — a broken build that I caught within 5 minutes because the alert email beat me to it.

Don’t over-alert

Add a cooldown: only alert once per hour per incident. If the site is still down an hour later, send another. One alert per incident is actionable; a flood of them is just noise.
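One way to sketch that cooldown is a timestamp file: record when the last alert went out, and suppress anything inside the window. The state-file path here is an assumption:

```shell
#!/usr/bin/env bash
# Cooldown sketch: at most one alert per hour per incident.
# The state-file path is an assumption; any writable location works.
STATE_FILE="${STATE_FILE:-/tmp/uptime-alert.last}"
COOLDOWN_SECS=3600

should_alert() {
  local now last
  now=$(date +%s)
  last=$(cat "$STATE_FILE" 2>/dev/null || echo 0)
  if [ $(( now - last )) -ge "$COOLDOWN_SECS" ]; then
    echo "$now" > "$STATE_FILE"   # record this alert's timestamp
    return 0
  fi
  return 1                        # still inside the cooldown window
}
```

The monitor then guards its email with `should_alert && send_alert ...`, and deleting the state file on recovery re-arms the next incident immediately.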

Alert 2: Garmin Recap Failure Check

Every morning at 8am

The Garmin recap runs at 7am. By 8am, a recap file should exist for today. If it doesn't, something broke overnight — and I should know right away, not after spending the day waiting for a recap email that's never coming.

🔧 Developer section: Garmin failure check script

This is a dead-man’s switch pattern: instead of the job alerting on success, a second job alerts on missing success. It catches silent failures — crashes, auth errors, network timeouts — that don’t generate their own error output.
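A sketch of that second job, assuming recaps land as dated markdown files in a known directory (the path and the filename pattern are guesses):

```shell
#!/usr/bin/env bash
# Dead-man's switch for the 7am recap job: by 8am a file for today should
# exist. The directory and filename pattern are assumptions.
RECAP_DIR="${RECAP_DIR:-$HOME/garmin/recaps}"

check_recap() {
  local day="$1"                  # date in YYYY-MM-DD form
  if [ -f "$RECAP_DIR/recap-$day.md" ]; then
    return 0                      # the 7am job left its artifact; stay quiet
  fi
  echo "Garmin recap missing for $day, the 7am job likely failed." >&2
  return 1
}

# From cron at 8am: alert (same Resend call as the uptime monitor) on failure.
# check_recap "$(date +%F)" || send_alert "Garmin recap missing" "check the logs"
```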

Alert 3: Nightly Fail2ban Ban Report

Every evening at 7pm

Fail2ban bans IPs automatically, but I wanted a daily snapshot: how many new IPs got banned today? Is that number trending up (could indicate a targeted scan) or holding steady (normal background noise)?

🔧 Developer section: Fail2ban report script

The delta matters more than the total. A large cumulative count after weeks of running is expected. An unusual spike in a single day is worth investigating.
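A sketch of the nightly report, assuming the `sshd` jail and a state file that remembers yesterday's cumulative total (both are my assumptions). The delta logic is the part worth testing:

```shell
#!/usr/bin/env bash
# Nightly Fail2ban snapshot: new bans today = cumulative total now minus
# the total recorded at yesterday's run. Jail and state file are assumptions.
JAIL="${JAIL:-sshd}"
STATE_FILE="${STATE_FILE:-$HOME/.fail2ban-last-total}"

current_total() {
  # Parse the "Total banned" line out of fail2ban-client's status output
  fail2ban-client status "$JAIL" | awk '/Total banned/ {print $NF}'
}

new_bans() {
  # Pure delta: $1 = current total, $2 = previously recorded total
  echo $(( $1 - $2 ))
}

report() {
  local total prev delta
  total=$(current_total)
  prev=$(cat "$STATE_FILE" 2>/dev/null || echo 0)
  delta=$(new_bans "$total" "$prev")
  echo "$total" > "$STATE_FILE"   # becomes tomorrow's baseline
  echo "Fail2ban: $delta new ban(s) today, $total total since install"
}

# From cron at 7pm: run report, then email the output via Resend.
```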

Alert 4: Supabase Health + GitHub Activity (Sunday)

Every Sunday at 8am

Two separate checks that share a Sunday timeslot because they’re both weekly sanity checks rather than urgent alerts:

🔧 Developer section: Supabase health check
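A minimal sketch of the health check: hit the project's REST endpoint with the anon key and treat anything outside 2xx as unhealthy. `SUPABASE_URL`, `SUPABASE_ANON_KEY`, and the table name are all assumptions:

```shell
#!/usr/bin/env bash
# Supabase health sketch: can the REST API answer a trivial query against
# the contact-form table? SUPABASE_URL, SUPABASE_ANON_KEY, and the table
# name contact_submissions are assumptions.

rest_status() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
    -H "apikey: $SUPABASE_ANON_KEY" \
    -H "Authorization: Bearer $SUPABASE_ANON_KEY" \
    "$SUPABASE_URL/rest/v1/contact_submissions?select=id&limit=1"
}

classify() {
  # Pure helper: map an HTTP status code to a one-line verdict
  case "$1" in
    2??) echo "healthy" ;;
    000) echo "unreachable (curl could not connect)" ;;
    *)   echo "unhealthy (HTTP $1)" ;;
  esac
}

# From cron on Sunday: classify "$(rest_status)" goes into the report.
```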

🔧 Developer section: GitHub activity report
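For the GitHub side, the public events API is enough to count the week's pushes without a token. The username, the `jq` dependency, and the report format are assumptions; the formatting helper is the easily testable part:

```shell
#!/usr/bin/env bash
# Weekly GitHub activity sketch: PushEvents from the public events API,
# grouped per repo. Username and the jq dependency are assumptions.
GITHUB_USER="${GITHUB_USER:-your-username}"

recent_pushes() {
  # "<count> <repo>" lines for PushEvents in the last 7 days (GNU date)
  curl -s "https://api.github.com/users/$GITHUB_USER/events?per_page=100" \
    | jq -r --arg since "$(date -u -d '7 days ago' +%FT%TZ)" \
        '.[] | select(.type == "PushEvent" and .created_at >= $since) | .repo.name' \
    | sort | uniq -c | awk '{print $1, $2}'
}

format_report() {
  # stdin: "<count> <repo>" lines; prints a per-repo summary plus a total
  awk '{total += $1; printf "  %s: %s push(es)\n", $2, $1}
       END {printf "Total: %d push(es) this week\n", total}'
}

# From cron on Sunday: recent_pushes | format_report
```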

Alert 5: Personal Server Changelog (Sunday)

Sunday 7am — Claude-generated

Every Sunday morning, Claude writes a short narrative of what the server did that week. It’s not a metrics dump — it’s a 3–5 sentence changelog in plain English, like a standup from the server to me.

🔧 Developer section: Claude-generated changelog
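The shape of such a job, assuming the Claude CLI's non-interactive print mode (`claude -p`) and invented log locations: gather plain facts first, then let the model narrate them.

```shell
#!/usr/bin/env bash
# Claude changelog sketch: collect the week's facts as plain text, then ask
# the Claude CLI (-p prints one response and exits) to narrate them.
# All log paths and the prompt wording are assumptions.

gather_facts() {
  local alerts
  echo "Garmin recaps this week: $(find "$HOME/garmin/recaps" -name '*.md' -mtime -7 2>/dev/null | wc -l)"
  alerts=$(grep -c ALERT "$HOME/logs/uptime.log" 2>/dev/null)
  echo "Uptime alerts sent: ${alerts:-0}"
  echo "Disk: $(df -h / | awk 'NR==2 {print $5 " used, " $4 " free"}')"
}

write_changelog() {
  gather_facts | claude -p \
    "Write a 3-5 sentence weekly changelog for my home server from these \
facts. Plain English, standup tone, no metrics dump."
}

# From cron Sunday 7am: write_changelog, saved for the weekly report email.
```

Keeping `gather_facts` separate means the facts stay deterministic; only the narration is delegated to the model.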

A recent example output from Claude:

Weekly Changelog — Home Server
Solid week. All 7 Garmin recaps generated on schedule — no missed mornings.
Fail2ban blocked 23 new IPs, all automated bots, nothing unusual.
Site was up 100% — 2,016 uptime checks passed, zero alerts sent.
3 AI batch jobs processed from the inbox queue.
Disk at 34% used, 87GB free. No action needed.

Claude writes the server’s weekly standup. No log-diving required.

Alert 6: Weekly Status Report Email

Sunday 9am — the full picture

All the pieces come together in one Sunday email: changelog, Supabase health, GitHub activity, Fail2ban weekly total, disk space, and a resource summary. It’s the one email that tells me everything about the past week without opening a terminal.

🔧 Developer section: Weekly report assembly
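One way to assemble it, assuming each Sunday job writes its output into a shared sections directory (the layout and section names are invented here):

```shell
#!/usr/bin/env bash
# Weekly report assembly sketch: each Sunday job drops a text file into a
# sections directory; this stitches them into one email body in a fixed
# order. Directory layout and section names are assumptions.
SECTIONS_DIR="${SECTIONS_DIR:-$HOME/status/sections}"

assemble() {
  local name f
  for name in changelog supabase github fail2ban disk; do
    f="$SECTIONS_DIR/$name.txt"
    if [ -f "$f" ]; then
      printf '== %s ==\n' "$name"   # simple plain-text section header
      cat "$f"
      printf '\n'
    fi                              # missing sections are skipped, not fatal
  done
}

# From cron Sunday 9am: assemble, then send via the same Resend call as Alert 1.
```

Skipping missing sections instead of failing means one broken Sunday job can't take down the whole report.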

The Meta-Alert: healthchecks.io

There’s one failure mode none of the above covers: what if the server itself goes down? If the machine crashes, no cron runs, no emails send, and I notice nothing until I happen to SSH in. The solution is a dead-man’s switch hosted externally.

🔧 Developer section: healthchecks.io heartbeat
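The heartbeat itself is one cron line. The curl flags mirror healthchecks.io's own suggested snippet; the check UUID is a placeholder you get from their dashboard:

```shell
# Ping healthchecks.io every 10 minutes. -f fails on HTTP errors, -m caps
# the request at 10 seconds, --retry smooths over transient blips. If the
# pings stop arriving, healthchecks.io emails from the outside.
*/10 * * * * curl -fsS -m 10 --retry 5 https://hc-ping.com/your-check-uuid > /dev/null
```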

The monitoring gap

Everything else I built monitors from the server. healthchecks.io monitors the server — from outside. It’s the only alert that can fire when the machine itself is unreachable. Without it, a power outage or crash is invisible until you notice the silence.

The Full Cron Schedule

Everything running on a single crontab:

🔧 Developer section: Complete cron schedule
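Pulled together, the schedule might look like this. Script names and paths are invented; the times follow the sections above (cron's day-of-week `0` is Sunday):

```shell
# m    h   dom mon dow  command            (script paths are assumptions)
*/5  *   *   *   *   $HOME/bin/uptime-check.sh --run   # Alert 1: site up?
0    8   *   *   *   $HOME/bin/garmin-check.sh         # Alert 2: recap exists?
0    19  *   *   *   $HOME/bin/fail2ban-report.sh      # Alert 3: nightly bans
0    7   *   *   0   $HOME/bin/server-changelog.sh     # Alert 5: Claude changelog
0    8   *   *   0   $HOME/bin/supabase-health.sh      # Alert 4: Supabase check
5    8   *   *   0   $HOME/bin/github-activity.sh      # Alert 4: GitHub activity
0    9   *   *   0   $HOME/bin/weekly-report.sh        # Alert 6: full report
*/10 *   *   *   *   curl -fsS -m 10 --retry 5 https://hc-ping.com/your-check-uuid > /dev/null
```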

Final Output

The server now manages itself. I never log in to check if things are running. I get emails when something is wrong, and I get a weekly report that tells me everything is fine. The no-email state is the good state.

What went fast

What needed patience

The hardest part of a home server isn’t setting it up — it’s knowing what’s happening on it without babysitting it. This alert stack is the answer. Every meaningful event surfaces as an email. Everything else is silence, which is good news.
