March 30, 2026 · 8 min read

How I Built a Personal API Server to Control Everything Remotely

An Express API on the Alienware that lets me trigger jobs, check system health, and pull reports — secured with API key auth, systemd, and the same port knocking that protects SSH

Yield: A personal API running on a non-standard port — remote control for the server’s jobs and system reports, accessible from any device on the home network or via port knocking from outside
Difficulty: Intermediate (Express.js, systemd service, API key auth, UFW firewall rules, port knocking)
Total Cook Time: ~4 hours across 3 sessions, with security hardening added over the following week


Where This Fits

This is the fourth post in the Alienware server series. The first three covered the foundation: turning the laptop into a headless server, locking it down with layered security, and building a full alert system so I’d know when something breaks. Those posts were about making the server exist, making it safe, and making it observable.

This one is about making it controllable — without opening a terminal every time.

The Problem: SSH for Everything

SSH is great for full terminal access. But most of the time I don’t need a terminal. I need to trigger a specific job, check if the server is healthy, or pull a quick report. Opening a terminal, connecting via SSH, remembering the right command, and running it is overkill for “run the market briefing in test mode.”

What I wanted was an API — something I could hit from a phone, a script, or a shortcut. One HTTP request to run a job. One GET to check health. No terminal required.

What the API Does

Six endpoints, each handling one concern:

🔧 Developer section: Express setup

Security: Three Layers Deep

An API running on a home server is a target if it’s exposed. The security model uses three independent layers, each of which would need to be bypassed separately:

Layer 1: Firewall (UFW)

The API port is only open to the local network subnet. Any request from outside the home network is dropped at the firewall level before it reaches Express. This means the API is invisible to the public internet by default.
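The post doesn’t show the exact rule, but a plausible UFW command looks like this (subnet and port are placeholders, and it assumes UFW’s default inbound policy is deny):

```shell
# Allow the API port only from the home subnet; everything else is dropped.
sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp
```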

Layer 2: API Key Auth

Even on the local network, every request must include the correct API key in the x-api-key header. The key is a long random string stored in a file readable only by the service user. No key, no access — even from a device on the same WiFi.

Layer 3: Port Knocking (External Access)

When I’m outside the home network and need API access, port knocking opens the door temporarily. A specific sequence of packets sent to three ports in the right order tells the firewall to open a non-standard port for my IP. A reverse sequence closes it. The knock script runs from my laptop or phone before any API request.
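The knock itself can be as simple as three connection attempts in order. This sketch assumes a knockd-style daemon watching for TCP SYNs; the host and ports are placeholders (the real sequence is, sensibly, not published):

```shell
#!/usr/bin/env bash
HOST=home.example.net
# Hypothetical knock sequence — three ports, in order, short pauses between.
for p in 7111 8222 9333; do
  nc -z -w 1 "$HOST" "$p"
  sleep 0.3
done
# The firewall now accepts this IP on the API port until the reverse knock.
```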

Defense in depth

Any one of these layers would be reasonable security on its own. All three together means an attacker would need to be on my LAN, know the API key, and know the port knock sequence. That’s not impossible, but it’s not worth anyone’s time for a personal API on a home server.

Keeping It Running: systemd

The API needs to survive reboots, crashes, and the monthly scheduled restart. That means running it as a systemd service, not a background process started with nohup or screen.

🔧 Developer section: systemd service

systemd is the right tool here because it handles the two hard problems: starting on boot and restarting on crash. A manually started process with node index.js & dies on reboot and stays dead on crash. The systemd service file is 12 lines and solves both problems permanently.
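A unit file matching that description would look roughly like this (user, paths, and service name are placeholders):

```ini
[Unit]
Description=Personal API server
After=network.target

[Service]
User=api
WorkingDirectory=/opt/personal-api
ExecStart=/usr/bin/node index.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Restart=always covers crashes, WantedBy=multi-user.target covers boot, and a single `systemctl enable --now` turns both on.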

How I Actually Use It

The most common use case: testing the market briefing. Instead of SSH-ing into the server, navigating to the right directory, sourcing the env file, and running the script with the test flag, I send one curl:

Terminal
$ curl -X POST http://[server-ip]:[port]/run/market-daily-test \
  -H "x-api-key: $API_KEY"

{"status":"started","job":"market-daily-test"}

One line to trigger a job. The API spawns the script and returns immediately.

The health endpoint is even simpler — a GET request with no auth that returns server vitals. I have a phone shortcut that hits /health and shows uptime, memory, and disk in a notification. One tap to check the server without opening a terminal.

The task endpoint lets me queue batch jobs from anywhere. Drop a request via POST, the every-minute cron worker picks it up, processes it, and the result appears in /tasks/status when it’s done. An asynchronous processing queue built with nothing but a file system and cron.

Final Output

The API has been running as a systemd service since late March. It survives reboots, restarts on crashes, and responds to health checks within milliseconds. The entire codebase is one JavaScript file and one systemd service file.

What went fast

What needed patience

The API is the simplest project on the Alienware. Six endpoints, one file, no database. But it changed how I interact with everything else. Instead of SSH for every small task, I send an HTTP request from the couch while Goose takes up the other half of it. The server went from something I log into to something I talk to.
