How I Built a Personal API Server to Control Everything Remotely
An Express API on the Alienware that lets me trigger jobs, check system health, and pull reports — secured with API key auth, systemd, and the same port knocking that protects SSH
Ingredients
- Node.js + Express — a lightweight web server framework. Express handles incoming HTTP requests and routes them to the right function (free)
- systemd — Linux’s built-in service manager. It starts programs on boot and automatically restarts them if they crash — like a supervisor that never sleeps (built-in)
- UFW — firewall rules to restrict the API port to LAN-only traffic (built-in)
- Port knocking (knockd) — hides the port from external scanners until a secret knock sequence is sent (already set up)
- Headless Linux server — the always-on Alienware (already set up)
Where This Fits
This is the fourth post in the Alienware server series. The first three covered the foundation: turning the laptop into a headless server, locking it down with layered security, and building a full alert system so I’d know when something breaks. Those posts were about making the server exist, making it safe, and making it observable.
This one is about making it controllable — without opening a terminal every time.
The Problem: SSH for Everything
SSH is great for full terminal access. But most of the time I don’t need a terminal. I need to trigger a specific job, check if the server is healthy, or pull a quick report. Opening a terminal, connecting via SSH, remembering the right command, and running it is overkill for “run the market briefing in test mode.”
What I wanted was an API — something I could hit from a phone, a script, or a shortcut. One HTTP request to run a job. One GET to check health. No terminal required.
What the API Does
Six endpoints, each handling one concern:
- GET /health — the only unauthenticated endpoint. Returns server uptime, memory usage, and disk space. Used by external monitoring to confirm the API process is alive.
- POST /run/:job — triggers a named job (e.g., /run/market-daily-test, /run/garmin-recap). Each job maps to a script on disk. The API spawns the script as a child process and returns immediately with a job ID.
- POST /tasks/submit — submits a batch task to the processing queue. Writes a file to the inbox folder that the every-minute cron worker picks up and processes.
- GET /tasks/status — lists recent batch task results (completed, pending, failed) with their output.
- GET /reports/disk — returns the latest disk usage snapshot.
- GET /reports/resources — returns the most recent resource samples (CPU, memory, swap) from the CSV logs.
🔧 Developer section: Express setup
- Single index.js file — Express app with middleware for API key validation
- API key stored in a protected file in the home directory (chmod 600, owned by the service user)
- Every request (except /health) must include an x-api-key header matching the stored key
- Job runner uses child_process.spawn() with detached mode so the API doesn’t wait for long-running scripts
- Response format is JSON for all endpoints
Security: Three Layers Deep
An API running on a home server is a target if it’s exposed. The security model uses three independent layers, each of which would need to be bypassed separately:
Layer 1: Firewall (UFW)
The API port is only open to the local network subnet. Any request from outside the home network is dropped at the firewall level before it reaches Express. This means the API is invisible to the public internet by default.
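A rule along these lines does it — the port number and subnet here are illustrative, not the post's actual values:

```
# Allow the API port only from the LAN subnet (example values)
sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp
```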
Layer 2: API Key Auth
Even on the local network, every request must include the correct API key in the x-api-key header. The key is a long random string stored in a file readable only by the service user. No key, no access — even from a device on the same Wi-Fi.
Layer 3: Port Knocking (External Access)
When I’m outside the home network and need API access, port knocking opens the door temporarily. A specific sequence of packets sent to three ports in the right order tells the firewall to open a non-standard port for my IP. A reverse sequence closes it. The knock script runs from my laptop or phone before any API request.
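A knockd stanza for this might look like the following sketch — the knock sequence, timeout, and API port are all illustrative, not the real values:

```
[openAPI]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 3000 -j ACCEPT

[closeAPI]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 3000 -j ACCEPT
```

knockd substitutes %IP% with the knocking client's address, so the hole opens only for that one IP.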
Any one of these layers would be reasonable security on its own. All three together means an attacker would need to be on my LAN, know the API key, and know the port knock sequence. That’s not impossible, but it’s not worth anyone’s time for a personal API on a home server.
Keeping It Running: systemd
The API needs to survive reboots, crashes, and the monthly scheduled restart. That means running it as a systemd service, not a background process started with nohup or screen.
🔧 Developer section: systemd service
- Service file at /etc/systemd/system/my-api.service
- Restart=always — if the process crashes, systemd restarts it immediately
- RestartSec=5 — 5-second delay between restarts to avoid crash loops
- WantedBy=multi-user.target — starts automatically on boot
- Logs go to Linux’s built-in log system (journald) — one command to watch them stream in real time
- Managed with standard commands: sudo systemctl start|stop|restart|status my-api
systemd is the right tool here because it handles the two hard problems: starting on boot and restarting on crash. A manually started process with node index.js & dies on reboot and stays dead on crash. The systemd service file is 12 lines and solves both problems permanently.
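For reference, a unit file along those lines might look like this sketch — the description, user, and paths are illustrative:

```
[Unit]
Description=Personal API server
After=network.target

[Service]
User=apiuser
WorkingDirectory=/home/apiuser/api
ExecStart=/usr/bin/node index.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```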
How I Actually Use It
The most common use case: testing the market briefing. Instead of SSH-ing into the server, navigating to the right directory, sourcing the env file, and running the script with the test flag, I send one curl:
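Something along these lines — the hostname, port, and key variable are placeholders, not the real values:

```
curl -X POST http://alienware.local:3000/run/market-daily-test \
     -H "x-api-key: $API_KEY"
```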
One line to trigger a job. The API spawns the script and returns immediately.
The health endpoint is even simpler — a GET request with no auth that returns server vitals. I have a phone shortcut that hits /health and shows uptime, memory, and disk in a notification. One tap to check the server without opening a terminal.
The task endpoint lets me queue batch jobs from anywhere. Drop a request via POST, the every-minute cron worker picks it up, processes it, and the result appears in /tasks/status when it’s done. An asynchronous processing queue built with nothing but a file system and cron.
Final Output
The API has been running as a systemd service since late March. It survives reboots, restarts on crashes, and responds to health checks within milliseconds. The entire codebase is one JavaScript file and one systemd service file.
What went fast
- Express setup — a Node.js HTTP server with 6 routes is about as simple as backend code gets. The first working version with health check and job runner took 45 minutes.
- systemd service — 12-line config file, copy to /etc/systemd/system/, run systemctl enable. The API has auto-started on every boot since day one.
- UFW rule — one command to allow the API port from the LAN subnet only. Network security in one line.
What needed patience
- Running jobs in the background — the first version of the job runner launched scripts and waited for them to finish before responding. Long-running jobs (like the market briefing) would time out the HTTP request. The fix was launching scripts as detached background processes — the API says “started” and returns immediately while the script keeps running on its own.
- API key file permissions — the key file needs to be readable by the systemd service user but not by other users on the system. Getting the ownership and chmod right (600, owned by the service user) required understanding how systemd runs processes under a specific user context.
- Port knocking for the API port — the existing knockd config only opened the SSH port. Extending it to also open a non-standard port required adding a second rule set. Testing it meant locking myself out twice before getting the sequence and timeout values right.
The API is the simplest project on the Alienware. Six endpoints, one file, no database. But it changed how I interact with everything else. Instead of SSH for every small task, I send an HTTP request from the couch while Goose takes up the other half of it. The server went from something I log into to something I talk to.