📋 Table of Contents
- The Dream: A Server That Just Works
- The Foundation: Pick Boring Technology
- Automated Updates: The Foundation You Can't Skip
- Log Management: Because Disks Fill Up
- Monitoring: Because You Need to Know When Things Break
- Backups: Because Everything Eventually Fails
- Automatic Cleanup Tasks
- Security: The Boring Stuff That Matters
- SSL Certificate Renewal
- The Reboot Question
- Documentation: Future You Will Thank Present You
- The Reality Check
The Dream: A Server That Just Works
You know what the best server is? The one you don't have to think about. The one that doesn't wake you up at 2 AM. The one that just quietly does its job, year after year, while you focus on literally anything else.
Sound impossible? It's not. But it requires upfront work and – here's the part people skip – accepting that "unattended" doesn't mean "abandoned." Even the most automated server needs occasional check-ins. The difference is you're checking on your terms, not when things are on fire.
I've been running servers for web hosting clients for years. Some of them I barely touch. They update themselves, clean themselves up, back themselves up, and alert me only when something genuinely needs attention. Want to know how that works? Let me show you what actually matters.
The Foundation: Pick Boring Technology
First rule of long-running servers: use stable, boring Linux distributions. This isn't the place to experiment with the bleeding edge.
Debian stable, Ubuntu LTS, Rocky Linux, AlmaLinux – these are your friends. They're maintained for years, they get security updates reliably, and they don't surprise you with breaking changes every six months.
I run Ubuntu LTS on most of my production servers. It's boring. It works. It has a clear 5-year support lifecycle. That's exactly what you want for something you're not planning to touch constantly.
Don't use: Rolling release distros, beta versions, anything marked "unstable." Save that for your laptop where you can break things without anyone caring.
Automated Updates: The Foundation You Can't Skip
Here's the controversial take: automated security updates are essential. Yes, I know some update might theoretically break something. You know what breaks things more often? Unpatched security vulnerabilities getting exploited.
On Debian/Ubuntu, unattended-upgrades is your solution:
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
Configure it to install security updates automatically. Edit /etc/apt/apt.conf.d/50unattended-upgrades:
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
    "${distro_id}ESMApps:${distro_codename}-apps-security";
};
Unattended-Upgrade::AutoFixInterruptedDpkg "true";
Unattended-Upgrade::MinimalSteps "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "false";
Notice that last line? I don't auto-reboot. Some people do. I prefer to schedule reboots myself during maintenance windows because I'm paranoid and I like to see my servers come back up.
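Related to the no-auto-reboot choice: on Debian/Ubuntu, updates that need a reboot (typically kernels) drop a flag file at /var/run/reboot-required. A small check like the sketch below (the helper name and wording are mine, not a standard tool) can remind you when it's time to schedule that maintenance window:

```shell
#!/bin/bash
# Sketch: report whether this host is waiting on a reboot.
# On Debian/Ubuntu, installed updates that require a reboot
# create the flag file /var/run/reboot-required.
reboot_pending() {
    local flag="${1:-/var/run/reboot-required}"
    if [ -f "$flag" ]; then
        echo "reboot pending"
    else
        echo "no reboot needed"
    fi
}

reboot_pending    # pipe this into mail(1) if you want the nag by email
```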
For RHEL-based systems, use dnf-automatic:
dnf install dnf-automatic
systemctl enable --now dnf-automatic.timer
Now your server patches itself. One less thing to worry about.
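Worth knowing: out of the box, dnf-automatic only downloads updates. If you want it to actually install them, the switch lives in /etc/dnf/automatic.conf (excerpt below; restrict to security errata if you want parity with the unattended-upgrades setup above):

```
# /etc/dnf/automatic.conf (excerpt)
[commands]
upgrade_type = security   # only security errata, not every available update
apply_updates = yes       # install updates, don't just download them
```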
Log Management: Because Disks Fill Up
You know what kills long-running servers? Full disks. You know what fills disks on servers nobody's watching? Logs.
By default, logrotate should already be configured, but let's make sure it's actually working and aggressive enough:
# Check if logrotate is running
systemctl status logrotate.timer
# Configure aggressive log rotation
cat > /etc/logrotate.d/custom-aggressive <<EOF
/var/log/*.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 0640 root adm
    sharedscripts
}
EOF
This keeps only 7 days of logs, compressed. Adjust to your needs, but don't keep months of logs unless you actually use them. One caveat: logrotate refuses duplicate entries, so if a packaged config already rotates one of these files you'll see a "duplicate log entry" warning for it; scope the glob accordingly.
Also, monitor your disk usage. Set up a simple cron job:
#!/bin/bash
# /usr/local/bin/disk-alert.sh
THRESHOLD=80
USAGE=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
    echo "WARNING: Disk usage is ${USAGE}% on $(hostname)" | \
        mail -s "Disk Space Alert on $(hostname)" admin@example.com
fi
Run it daily. When your disk hits 80%, you'll know before it hits 100% and causes problems.
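If /var, /home, or a data volume is a separate filesystem, checking / alone misses it. A hedged variant of the same idea (the mount list here is an example; adjust it to your fstab):

```shell
#!/bin/bash
# Sketch: alert on any of several mount points crossing the threshold.
THRESHOLD=80

check_usage() {
    # Print the use% (number only) for one mount point.
    df -P "$1" | awk 'NR==2 { sub("%", "", $5); print $5 }'
}

for mount in / /var /home; do
    usage=$(check_usage "$mount" 2>/dev/null) || continue
    if [ "${usage:-0}" -gt "$THRESHOLD" ]; then
        echo "WARNING: ${mount} at ${usage}% on $(hostname)"
    fi
done
```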
Monitoring: Because You Need to Know When Things Break
"Unattended" doesn't mean "unmonitored." It means you're not manually checking everything constantly, but the system tells you when something's wrong.
Bare minimum monitoring you need: server is reachable, critical services are running, disk isn't full, SSL certificates aren't expired.
For simple setups, I use a combination of external uptime monitoring (UptimeRobot, Pingdom, whatever) and basic local monitoring with email alerts.
Simple service monitoring script:
#!/bin/bash
# /usr/local/bin/service-monitor.sh
SERVICES="nginx mysql redis-server"
for service in $SERVICES; do
    if ! systemctl is-active --quiet "$service"; then
        echo "$service is down on $(hostname)" | \
            mail -s "Service Alert: $service down" admin@example.com
        systemctl start "$service"
    fi
done
Add to crontab to run every 5 minutes:
*/5 * * * * /usr/local/bin/service-monitor.sh
Not sophisticated, but it works. Service goes down? You get an email and it attempts to restart automatically.
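One refinement worth considering: if a restart ever hangs, a 5-minute cron cadence can stack runs on top of each other. Wrapping the job in flock(1) prevents overlap (the lock file path here is arbitrary):

```
*/5 * * * * flock -n /tmp/service-monitor.lock /usr/local/bin/service-monitor.sh
```

With -n, a new run simply exits immediately if the previous one still holds the lock.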
Backups: Because Everything Eventually Fails
You know what's worse than a server failing? A server failing when you realize your backups don't work. Or don't exist.
Automate your backups. Test your backups. Seriously.
Simple backup strategy that actually works:
#!/bin/bash
# /usr/local/bin/backup.sh
BACKUP_DIR="/var/backups/daily"
DATE=$(date +%Y%m%d)
RETENTION_DAYS=14
mkdir -p "$BACKUP_DIR"
# Backup databases
mysqldump --all-databases | gzip > "$BACKUP_DIR/mysql-$DATE.sql.gz"
# Backup important config files
tar czf "$BACKUP_DIR/configs-$DATE.tar.gz" \
    /etc/nginx \
    /etc/mysql \
    /etc/php \
    /etc/ssl
# Backup web content
tar czf "$BACKUP_DIR/www-$DATE.tar.gz" /var/www
# Copy to remote location (adjust for your setup)
rsync -az "$BACKUP_DIR/" backup-server:/backups/$(hostname)/
# Delete old local backups
find "$BACKUP_DIR" -name "*.gz" -mtime +"$RETENTION_DAYS" -delete
Run nightly. Keep 14 days local, sync everything offsite.
Pro tip: Schedule a monthly cron job that tests restoring from backup to a test environment. Because untested backups are useless backups.
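Even without a full restore environment, you can catch the most common failure (truncated or corrupt archives) cheaply: gzip -t reads an archive end to end and exits nonzero if it's damaged. A sketch against the backup directory used above (the function name and output format are mine):

```shell
#!/bin/bash
# Sketch: confirm every backup archive at least decompresses cleanly.
# gzip -t catches truncated or corrupt files without extracting them.
verify_backups() {
    local dir="$1" bad=0
    for f in "$dir"/*.gz; do
        [ -e "$f" ] || continue          # no backups at all: nothing to check
        if ! gzip -t "$f" 2>/dev/null; then
            echo "CORRUPT: $f"
            bad=1
        fi
    done
    return "$bad"
}

verify_backups /var/backups/daily || echo "backup verification FAILED on $(hostname)"
```

This doesn't prove the data is complete or restorable – only a real restore does that – but it turns "silently corrupt for months" into a same-day email.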
Automatic Cleanup Tasks
Servers accumulate junk. Old package caches, temp files, expired sessions. Clean this stuff automatically:
#!/bin/bash
# /usr/local/bin/cleanup.sh
# Clean package cache
apt-get clean
apt-get autoclean
apt-get autoremove -y
# Clean old temp files (older than 7 days)
find /tmp -type f -atime +7 -delete
find /var/tmp -type f -atime +7 -delete
# Clean PHP sessions (if older than 30 days)
find /var/lib/php/sessions -type f -mtime +30 -delete
# Clean old log files not managed by logrotate
find /var/log -name "*.log.*" -mtime +30 -delete
Run weekly. Your disk will thank you.
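A habit that has saved me from bad find expressions: preview before you delete. Swap -delete for -print (wrapped in a tiny helper below; the helper name is mine) and only wire in -delete once the list looks right:

```shell
#!/bin/bash
# Sketch: list files older than N days before letting find delete them.
list_stale() {
    local dir="$1" days="$2"
    find "$dir" -type f -mtime +"$days" -print
}

list_stale /tmp 7
# Once the output looks right, the destructive version is:
# find /tmp -type f -mtime +7 -delete
```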
Security: The Boring Stuff That Matters
An unattended server needs good security because you're not constantly watching for intrusions.
Essentials: fail2ban for automatic IP banning, proper firewall configuration, SSH key authentication only, disabled root login, automatic security updates (already covered).
Set up fail2ban:
apt install fail2ban
systemctl enable fail2ban
# Basic config that actually works
cat > /etc/fail2ban/jail.local <<EOF
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5
[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log
[nginx-http-auth]
enabled = true
EOF
systemctl restart fail2ban
Now brute-force attempts automatically result in temporary bans. One less thing to worry about.
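One addition I'd suggest before you rely on this unattended: whitelist the IPs you administer from, so a fat-fingered password can't lock you out of your own box. In jail.local (203.0.113.10 is a documentation placeholder; use your real static IP):

```
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 203.0.113.10
```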
SSL Certificate Renewal
If you're running websites, you need SSL certificates. And they expire. Automate this with certbot:
apt install certbot python3-certbot-nginx
# Get certificate
certbot --nginx -d example.com
# Auto-renewal is configured by default
systemctl status certbot.timer
Certbot automatically renews certificates 30 days before expiration. Just make sure the timer is enabled.
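If you want proactive assurance rather than discovering a renewal failure at expiry, certbot renew --dry-run exercises the whole renewal path against the Let's Encrypt staging environment. A monthly cron entry along these lines (schedule and recipient are examples) emails you only on failure:

```
0 4 1 * * certbot renew --dry-run >/dev/null 2>&1 || echo "certbot dry-run failed on $(hostname)" | mail -s "Cert renewal check" admin@example.com
```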
The Reboot Question
Should servers automatically reboot for kernel updates? This is religious territory, but here's my take: schedule reboots monthly during maintenance windows, don't fully automate them.
Why? Because I want to be awake and available when a server reboots, just in case it doesn't come back up cleanly. Your mileage may vary.
If you do want automatic reboots after updates:
# In /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";
This reboots at 3 AM if needed. Make sure you have console access if something goes wrong.
Documentation: Future You Will Thank Present You
Here's something nobody does but everyone should: document your server configuration. When you log into a server you haven't touched in 18 months because something weird happened, you'll want notes.
Keep a simple text file at /root/README.txt:
Server: web01.example.com
Purpose: Production website hosting
Setup date: 2025-10-18
Services:
- nginx on port 80/443
- MySQL on 3306 (local only)
- PHP 8.2-FPM
Automated tasks:
- Daily backups at 02:00
- Security updates via unattended-upgrades
- Weekly cleanup script
- Monthly disk usage report
Important paths:
- Website: /var/www/example.com
- Backups: /var/backups/daily
- Custom scripts: /usr/local/bin
Contact: admin@example.com
Sounds simple? It is. It's also incredibly valuable when you need it.
The Reality Check
A properly configured server can run for months or even years with minimal intervention. I have servers that I check on quarterly – they just work. But this requires: proper initial configuration, automated updates and backups, good monitoring with alerts, scheduled maintenance windows, documentation.
"Unattended" means the server handles routine maintenance itself. It doesn't mean you abandon it and hope for the best.
Set it up right once, and you'll spend your time building new things instead of firefighting old servers. That's the whole point, isn't it?
P.S. – Test your monitoring and backups regularly. The best automated system in the world is worthless if you don't verify it actually works. Trust, but verify.