Backup Security and Disaster Recovery for Ubuntu 22.04 & 24.04 Servers: Complete Protection Guide
Comprehensive step-by-step guide to implementing ransomware-resistant backup strategies on Ubuntu 22.04 and 24.04, including rsync automation, LUKS encryption, cloud integration with Restic, systemd timers, and disaster recovery procedures.
After years of working with Ubuntu servers and witnessing firsthand the devastating impact of ransomware attacks and data loss, I've learned that a robust backup strategy isn't optional; it's your lifeline when disaster strikes. In 2025, with ransomware attacks on Linux servers increasing significantly and new variants like Gunra, Helldown, and SEXi specifically targeting Ubuntu systems, the stakes have never been higher. This guide walks you through building a comprehensive backup and disaster recovery system that's both secure and cost-effective, using battle-tested tools and techniques that have saved countless organizations from catastrophe.
Understanding the Foundation
Before diving into implementation, you need to understand what makes a backup system truly resilient. Modern Ubuntu servers face threats from multiple angles: ransomware that encrypts your data, hardware failures that corrupt filesystems, and human errors that delete critical files. Your backup strategy needs to address all these scenarios while remaining practical enough to implement and maintain. The key is layered defense: multiple backup destinations, encryption at rest and in transit, immutable backups that ransomware can't touch, and regular testing to ensure your backups actually work when you need them.
Setting up your environment requires Ubuntu 22.04 LTS or 24.04 LTS with at least 4GB of RAM and sufficient storage for your backup retention needs. You'll need root or sudo access to configure system services, and ideally, you should have a separate backup destination, whether that's a network-attached storage device, a remote server, or cloud storage. The beauty of Ubuntu's ecosystem is that all the tools we'll use are available in the standard repositories, making installation straightforward and updates automatic.
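If you want the tooling in place up front, everything used later in this guide can be installed in one pass; a minimal sketch (mailutils supplies the mail command that the alerting scripts below rely on):
sudo apt update
sudo apt install -y rsync cryptsetup cryptsetup-initramfs restic mailutils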
Start by creating a dedicated backup user that will handle all backup operations. This separation of privileges is crucial for security: if your main system gets compromised, the attacker won't automatically have access to modify or delete your backups. Execute these commands to set up the backup user with appropriate permissions:
sudo useradd -r -s /bin/false -d /var/lib/backup backup
sudo usermod -L backup
sudo mkdir -p /var/lib/backup/.ssh
sudo chown -R backup:backup /var/lib/backup
sudo chmod 700 /var/lib/backup/.ssh
This creates a system user without a login shell, locks password authentication, and sets up an SSH directory for key-based authentication. The -r flag creates a system account with a UID below 1000, which Ubuntu treats differently for security purposes. The locked password ensures this account can only be accessed through sudo or SSH keys, never through direct login.
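If backups will be pushed to or pulled from a remote machine over SSH, generate a dedicated key pair for this account now. A short sketch, where the remote host and login are placeholders for your actual backup target:
# Create a passphrase-less key owned by the backup user
sudo -u backup ssh-keygen -t ed25519 -N "" -C "backup@$(hostname)" -f /var/lib/backup/.ssh/id_ed25519
# Install the public key on the remote backup target (placeholder host and user)
sudo cat /var/lib/backup/.ssh/id_ed25519.pub | \
ssh admin@backup-server.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'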
Implementing Rsync for Efficient Backups
Rsync forms the backbone of many backup solutions because of its efficiency and reliability. Unlike simple copy commands, rsync only transfers changed data, making subsequent backups lightning fast. I've seen organizations reduce their backup windows from hours to minutes just by switching to rsync with proper configuration. The real magic happens when you combine rsync with hard links, creating space-efficient incremental backups that appear as complete snapshots.
Create a comprehensive rsync backup script that handles everything from error checking to retention policies. Save this as /usr/local/bin/rsync-backup.sh:
#!/bin/bash
SOURCE_DIR="/home /etc /var/www"
BACKUP_BASE="/srv/backups"
EXCLUDE_FILE="/etc/rsync-exclude.txt"
LOG_FILE="/var/log/rsync-backup.log"
EMAIL="admin@example.com"
RETENTION_DAYS=30
NICE_LEVEL=19
IONICE_CLASS=3
error_exit() {
echo "ERROR: $1" | tee -a "$LOG_FILE"
echo "Backup failed on $(date)" | mail -s "Backup Failed" "$EMAIL"
exit 1
}
BACKUP_DIR="$BACKUP_BASE/backup-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR" || error_exit "Failed to create backup directory"
DISK_USAGE=$(df "$BACKUP_BASE" | tail -1 | awk '{print $(NF-1)}' | sed 's/%//')
[ "$DISK_USAGE" -gt 80 ] && error_exit "Insufficient disk space: ${DISK_USAGE}% used"
echo "Starting backup at $(date)" | tee -a "$LOG_FILE"
nice -n "$NICE_LEVEL" ionice -c "$IONICE_CLASS" \
rsync -avHAXS \
--numeric-ids \
--delete \
--delete-excluded \
--exclude-from="$EXCLUDE_FILE" \
--link-dest="$BACKUP_BASE/latest" \
--log-file="$LOG_FILE" \
--stats \
$SOURCE_DIR \
"$BACKUP_DIR/" || error_exit "rsync failed"
rm -f "$BACKUP_BASE/latest"
ln -s "$BACKUP_DIR" "$BACKUP_BASE/latest"
find "$BACKUP_BASE" -maxdepth 1 -type d -name "backup-*" -mtime +$RETENTION_DAYS -exec rm -rf {} \;
echo "Backup completed successfully at $(date)" | tee -a "$LOG_FILE"
This script implements several critical features that make it production-ready. The nice and ionice commands ensure backups don't impact system performance: they tell the kernel to give backup processes lower priority for CPU and disk I/O respectively. The --link-dest option creates hard links to unchanged files from the previous backup, dramatically reducing storage requirements while maintaining complete point-in-time snapshots. The --numeric-ids flag preserves ownership information even when UIDs don't match between systems, crucial for disaster recovery scenarios.
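Because each dated directory looks like a complete snapshot, restoring individual files needs no special tooling; it is just a copy out of the snapshot tree. A quick sketch, assuming the /srv/backups layout created by the script above (each source directory appears under its own basename, so /etc lands in etc/):
# Compare a live file against the most recent snapshot before restoring
sudo diff /etc/fstab /srv/backups/latest/etc/fstab
# Pull a single file back, preserving permissions and ownership
sudo rsync -avH /srv/backups/latest/etc/ssh/sshd_config /etc/ssh/sshd_config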
Now create the exclude file at /etc/rsync-exclude.txt to prevent backing up unnecessary data:
/proc/*
/sys/*
/dev/*
/run/*
/tmp/*
/var/tmp/*
*/.cache/*
*/cache/*
node_modules/
.git/
*.tmp
*.swp
These exclusions remove system directories that are recreated at boot, temporary files, and development artifacts that don't need backing up. Cache directories and version control repositories can consume massive amounts of space without providing value in backups. The beauty of rsync is that you can test your exclude patterns with the --dry-run flag before committing to actual backups.
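For example, a preview run like the following reports exactly which files would be transferred or excluded without copying anything; the destination path is only a placeholder during a dry run:
sudo rsync -avHAXS --dry-run --itemize-changes \
--exclude-from=/etc/rsync-exclude.txt \
/home /etc /var/www /srv/backups/dry-run-test/ | less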
Securing Backups with Encryption and Immutability
Encryption transforms your backups from potential liability into secure assets. Even if attackers steal your backup media or compromise your cloud storage, properly encrypted backups remain useless to them. Ubuntu provides multiple encryption layers, from filesystem-level encryption with LUKS to application-level encryption with GPG. I prefer combining both for defense in depth: LUKS protects the storage medium while GPG protects individual backup archives.
Setting up LUKS encryption for your backup partition requires careful planning but provides transparent encryption that's invisible to applications. Create an encrypted backup volume with these commands:
sudo apt install cryptsetup cryptsetup-initramfs
sudo cryptsetup luksFormat --type=luks1 --hash=sha256 --pbkdf=pbkdf2 /dev/sdb1
sudo cryptsetup luksOpen /dev/sdb1 backup_crypt
sudo mkfs.ext4 /dev/mapper/backup_crypt
sudo mkdir -p /etc/luks
sudo dd if=/dev/urandom of=/etc/luks/backup.keyfile bs=512 count=1
sudo chmod 400 /etc/luks/backup.keyfile
sudo cryptsetup luksAddKey /dev/sdb1 /etc/luks/backup.keyfile
echo "backup_crypt UUID=$(blkid -s UUID -o value /dev/sdb1) /etc/luks/backup.keyfile luks,discard" >> /etc/crypttab
echo "/dev/mapper/backup_crypt /backup ext4 defaults 0 2" >> /etc/fstab
This configuration creates a LUKS-encrypted partition that automatically unlocks at boot using a keyfile. The keyfile itself needs protection: store it on the root filesystem, which should be encrypted separately, or better yet on a hardware security module if available. The luks1 format ensures compatibility with the GRUB bootloader if you ever need to boot from this backup. The pbkdf2 key derivation function provides good security while maintaining reasonable unlock speeds.
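Before trusting the volume with real data, it is worth confirming the header, both keyslots, and the mount all behave as expected. A short verification sketch, assuming the /backup mount point from the fstab entry above:
sudo cryptsetup luksDump /dev/sdb1
# --test-passphrase checks the keyfile without activating the device
sudo cryptsetup open --test-passphrase --key-file /etc/luks/backup.keyfile /dev/sdb1 && echo "keyfile unlocks the volume"
sudo mkdir -p /backup
sudo mount /dev/mapper/backup_crypt /backup
df -h /backup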
Beyond encryption, immutable backups provide crucial protection against ransomware. Even if attackers gain root access, they cannot modify or delete immutable files without first removing the immutable flag, an action that can trigger alerts. Implement immutability with filesystem attributes:
#!/bin/bash
BACKUP_DIR="/backup/$(date +%Y%m%d)"
tar czf "$BACKUP_DIR/system-backup.tar.gz" /etc /home /var/www
chattr +i "$BACKUP_DIR"
logger "Backup directory $BACKUP_DIR set to immutable"
# To remove immutability when needed for cleanup:
# find /backup -type d -mtime +30 -exec chattr -i {} \;
# find /backup -type d -mtime +30 -exec rm -rf {} \;
The immutable attribute prevents any modification, even by root. This protection extends to the kernel level: only processes with the CAP_LINUX_IMMUTABLE capability can change these attributes. Smart ransomware might try to remove immutability, but this action generates audit logs that your monitoring system can catch. Combine immutability with regular snapshots on ZFS or Btrfs for even stronger protection.
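To make that detection concrete, you can have auditd record every execution of chattr and any write or attribute change under the backup tree. A minimal sketch; the -k key names are arbitrary labels used later when searching the audit log:
sudo apt install auditd
# Log every execution of chattr (covers both setting and clearing +i)
echo "-w /usr/bin/chattr -p x -k backup_immutable" | sudo tee -a /etc/audit/rules.d/backup.rules
# Log writes and attribute changes under the backup tree
echo "-w /backup -p wa -k backup_tamper" | sudo tee -a /etc/audit/rules.d/backup.rules
sudo augenrules --load
# Review matching events later
sudo ausearch -k backup_immutable --start today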
Protecting Against Ransomware
Modern ransomware specifically targets backup systems, knowing that organizations with good backups are less likely to pay ransoms. The SEXi ransomware variant, for instance, specifically looks for and encrypts VMware backup files on Linux hosts. Your defense strategy needs multiple layers: AppArmor confinement to limit what backup processes can access, file integrity monitoring to detect unauthorized changes, and air-gapped backups that remain physically disconnected from the network.
Configure AppArmor profiles to restrict backup script capabilities while still allowing necessary operations. Create /etc/apparmor.d/usr.local.bin.backup with appropriate restrictions:
#include <tunables/global>
/usr/local/bin/backup.sh {
#include <abstractions/base>
#include <abstractions/bash>
capability dac_read_search,
capability setuid,
capability setgid,
/backup/** rwk,
/var/backups/** rwk,
/etc/** r,
/var/lib/** r,
/home/** r,
/var/lib/mysql/** r,
/usr/bin/mysqldump ix,
/usr/bin/ssh ix,
/usr/bin/rsync ix,
/home/backup/.ssh/** r,
deny /etc/shadow r,
deny /etc/gshadow r,
deny /proc/*/mem r,
deny /dev/mem r,
/var/log/backup.log w,
}
This profile allows the backup script to read system files and write to backup locations while preventing access to sensitive authentication files and memory. Apply the profile with sudo apparmor_parser -r /etc/apparmor.d/usr.local.bin.backup and enforce it with sudo aa-enforce /usr/local/bin/backup.sh. AppArmor operates at the kernel level, making it extremely difficult for userspace ransomware to bypass.
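Before enforcing, it is often safer to run the profile in complain mode for a backup cycle or two and review what would have been denied. A short sketch using the aa-complain and aa-enforce helpers from the apparmor-utils package (the script path matches the profile above):
sudo apt install apparmor-utils
sudo aa-complain /usr/local/bin/backup.sh
# Run one backup cycle under the profile, then review the audit messages
sudo /usr/local/bin/backup.sh
sudo journalctl -k | grep -i apparmor | tail -n 20
sudo aa-enforce /usr/local/bin/backup.sh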
Implement comprehensive ransomware detection that monitors for suspicious patterns. Save this script as /usr/local/bin/ransomware-detection.sh:
#!/bin/bash
LOG_FILE="/var/log/ransomware-detection.log"
ALERT_EMAIL="security@company.com"
SUSPICIOUS_EXTENSIONS=(".encrypted" ".locked" ".crypto" ".crypt" ".enc")
for ext in "${SUSPICIOUS_EXTENSIONS[@]}"; do
if find /home /var /opt -name "*$ext" -type f -mtime -1 | grep -q .; then
echo "$(date): Suspicious files with extension $ext detected" >> "$LOG_FILE"
mail -s "RANSOMWARE ALERT: Suspicious files detected" "$ALERT_EMAIL" < "$LOG_FILE"
fi
done
RANSOM_INDICATORS=("DECRYPT" "RANSOM" "PAYMENT" "BITCOIN")
for indicator in "${RANSOM_INDICATORS[@]}"; do
if find /home /var /tmp -name "*$indicator*" -type f -mtime -1 | grep -q .; then
echo "$(date): Potential ransom note detected containing $indicator" >> "$LOG_FILE"
mail -s "RANSOMWARE ALERT: Ransom note detected" "$ALERT_EMAIL" < "$LOG_FILE"
fi
done
ENCRYPTION_PROCS=$(ps aux | grep -E "(gpg|openssl|cryptsetup)" | grep -cv grep)
if [ "$ENCRYPTION_PROCS" -gt 5 ]; then
echo "$(date): High encryption activity detected ($ENCRYPTION_PROCS encryption processes running)" >> "$LOG_FILE"
fi
Schedule this script to run every 15 minutes via cron: */15 * * * * root /usr/local/bin/ransomware-detection.sh. Early detection can mean the difference between isolating a single infected system and losing your entire infrastructure. The script looks for common ransomware indicators: encrypted file extensions, ransom notes with payment instructions, and abnormal encryption process activity.
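Rather than editing the system crontab directly, you can drop that schedule into its own file under /etc/cron.d, which keeps the entry easy to version and remove. A sketch:
cat << 'EOF' | sudo tee /etc/cron.d/ransomware-detection
*/15 * * * * root /usr/local/bin/ransomware-detection.sh
EOF
sudo chmod 644 /etc/cron.d/ransomware-detection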
Cloud Integration and Remote Backups
Cloud storage has revolutionized backup strategies by providing offsite storage without the hassle of tape rotation or physical media management. Backblaze B2 stands out for Ubuntu backups with its S3-compatible API, free egress up to three times your storage amount, and pricing at just $6 per terabyte per month. Combine cloud storage with tools like Restic or Duplicity for encrypted, deduplicated backups that minimize both storage costs and bandwidth usage.
Restic excels at cloud backups with its built-in encryption, deduplication, and support for numerous backends. Install and configure Restic for Backblaze B2:
sudo apt update
sudo apt install restic
export B2_ACCOUNT_ID="your_account_id"
export B2_ACCOUNT_KEY="your_account_key"
export RESTIC_PASSWORD="your-secure-password"
restic init -r b2:your-bucket-name:/restic-repo
cat > /usr/local/bin/restic-backup.sh << 'EOF'
#!/bin/bash
export B2_ACCOUNT_ID="your_account_id"
export B2_ACCOUNT_KEY="your_account_key"
export RESTIC_PASSWORD_FILE="/root/.restic-password"
restic backup /home /etc /var/www \
--exclude-caches \
--exclude-file=/etc/restic-exclude.txt \
-r b2:your-bucket-name:/restic-repo
restic forget \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 12 \
--prune \
-r b2:your-bucket-name:/restic-repo
echo "Backup completed at $(date)"
EOF
chmod +x /usr/local/bin/restic-backup.sh
echo "your-secure-password" > /root/.restic-password
chmod 600 /root/.restic-password
Restic automatically encrypts all data before upload using AES-256 in counter mode with Poly1305 authentication. The repository format includes deduplication at the chunk level, meaning identical data blocks are only stored once regardless of how many backups contain them. This deduplication typically reduces storage requirements by 30-60%, depending on how much your data changes between backups.
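Once the first backup finishes, confirm you can actually list, verify, and restore data from the repository. The B2 credentials and the password file must be exported for every restic invocation, and the bucket name below is the same placeholder used above:
export RESTIC_REPOSITORY="b2:your-bucket-name:/restic-repo"
export RESTIC_PASSWORD_FILE="/root/.restic-password"
restic snapshots
restic check
# Restore a single known file into a scratch directory and compare it
restic restore latest --target /tmp/restic-restore-test --include /etc/hostname
diff /etc/hostname /tmp/restic-restore-test/etc/hostname && echo "restore verified"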
For databases, implement application-specific backup procedures that ensure consistency. MySQL and PostgreSQL require special handling to avoid backing up corrupted data. Create a database backup script that handles both MySQL and PostgreSQL:
#!/bin/bash
DB_BACKUP_DIR="/var/backups/databases"
RETENTION_DAYS=7
mkdir -p "$DB_BACKUP_DIR"
# MySQL backup
if systemctl is-active --quiet mysql; then
MYSQL_DATABASES=$(mysql -e "SHOW DATABASES;" | grep -v -E "^(Database|information_schema|performance_schema|mysql|sys)$")
for db in $MYSQL_DATABASES; do
mysqldump --single-transaction \
--routines \
--triggers \
--events \
--add-drop-database \
--create-options \
"$db" | gzip > "$DB_BACKUP_DIR/${db}_$(date +%Y%m%d).sql.gz"
done
fi
# PostgreSQL backup
if systemctl is-active --quiet postgresql; then
sudo -u postgres pg_dumpall | gzip > "$DB_BACKUP_DIR/postgresql_all_$(date +%Y%m%d).sql.gz"
fi
find "$DB_BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
The --single-transaction flag gives MySQL a consistent snapshot of InnoDB tables without locking them, crucial for production databases. For PostgreSQL, pg_dumpall captures all databases plus global objects like roles and tablespaces. Both commands produce SQL scripts that can recreate your databases from scratch, providing maximum flexibility for recovery scenarios.
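Recovery is symmetrical: decompress the dump and replay it through the client. A sketch with placeholder database names and dates:
# MySQL: create the database first in case the dump does not include a CREATE DATABASE statement
mysql -e "CREATE DATABASE IF NOT EXISTS myapp"
gunzip -c /var/backups/databases/myapp_20250101.sql.gz | mysql myapp
# PostgreSQL: the pg_dumpall output recreates roles, tablespaces, and databases
gunzip -c /var/backups/databases/postgresql_all_20250101.sql.gz | sudo -u postgres psql postgres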
Advanced Scheduling with Systemd
While cron remains popular, systemd timers offer superior scheduling capabilities for modern Ubuntu systems. Timers provide better logging, dependency management, and failure handling than traditional cron jobs. They also integrate seamlessly with systemd's journal, making troubleshooting much easier.
Create a systemd service unit for backups at /etc/systemd/system/backup.service:
[Unit]
Description=System Backup
OnFailure=backup-failure-notification.service
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
User=root
ExecStartPre=/usr/bin/test -x /usr/local/bin/rsync-backup.sh
ExecStart=/usr/local/bin/rsync-backup.sh
ExecStartPost=/usr/local/bin/backup-verify.sh
Environment=BACKUP_RETENTION_DAYS=30
Environment=BACKUP_DEST=/srv/backups
StandardOutput=journal
StandardError=journal
TimeoutStartSec=3h
Now create the timer unit at /etc/systemd/system/backup.timer:
[Unit]
Description=Daily System Backup
[Timer]
OnCalendar=daily
RandomizedDelaySec=1800
Persistent=true
AccuracySec=1h
[Install]
WantedBy=timers.target
The timer configuration runs backups daily with a random delay of up to 30 minutes, preventing all servers from starting backups simultaneously and overwhelming your backup infrastructure. The Persistent directive ensures missed backups run when the system comes back online, crucial for systems that don't run 24/7. Enable the timer with sudo systemctl enable --now backup.timer.
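After enabling the timer, verify that it is actually scheduled, then trigger one manual run and inspect the journal to confirm the whole chain works end to end:
sudo systemctl daemon-reload
systemctl list-timers backup.timer
# Trigger one run immediately instead of waiting for the schedule
sudo systemctl start backup.service
journalctl -u backup.service -n 50 --no-pager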
Create a notification service for backup failures at /etc/systemd/system/backup-failure-notification.service:
[Unit]
Description=Backup Failure Email Notification
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'systemctl status backup.service | mail -s "Backup Failed on $(hostname)" admin@company.com'
This notification service automatically triggers when the backup service fails, sending detailed failure information to administrators. The systemd journal captures all output, making it easy to investigate issues: sudo journalctl -u backup.service -u backup.timer --since yesterday.
Monitoring and Verification
Untested backups are merely hopes and prayers; you need regular verification to ensure recovery actually works when disaster strikes. I've seen too many organizations discover their backups were corrupted or incomplete only when trying to recover from ransomware. Implement automated verification that tests both backup integrity and recovery procedures.
Create a comprehensive monitoring script at /usr/local/bin/backup-monitor.sh:
#!/bin/bash
BACKUP_DIR="/srv/backups"
LOG_FILE="/var/log/backup-monitor.log"
EMAIL="admin@example.com"
WARNING_AGE=86400 # 24 hours in seconds
MIN_BACKUP_SIZE=1048576 # 1MB minimum
check_recent_backups() {
local current_time=$(date +%s)
local backup_found=false
for backup in $(find "$BACKUP_DIR" -name "*.tar.gz" -type f); do
local backup_time=$(stat -c %Y "$backup")
local backup_age=$((current_time - backup_time))
local backup_size=$(stat -c %s "$backup")
if [ $backup_age -le $WARNING_AGE ]; then
backup_found=true
if [ $backup_size -lt $MIN_BACKUP_SIZE ]; then
echo "WARNING: Backup size too small: $(basename "$backup")" >> "$LOG_FILE"
echo "Backup size warning: $(basename "$backup") is only ${backup_size} bytes" | \
mail -s "Backup Size Warning" "$EMAIL"
fi
fi
done
if [ "$backup_found" = false ]; then
echo "ERROR: No recent backups found!" >> "$LOG_FILE"
echo "No backups found in last 24 hours!" | mail -s "Backup Missing Alert" "$EMAIL"
fi
}
check_backup_integrity() {
for backup in $(find "$BACKUP_DIR" -name "*.tar.gz" -mtime -3 -type f); do
if ! tar -tzf "$backup" > /dev/null 2>&1; then
echo "ERROR: Integrity check failed for $(basename "$backup")" >> "$LOG_FILE"
echo "Backup corruption detected: $(basename "$backup")" | \
mail -s "Backup Integrity Alert" "$EMAIL"
fi
done
}
check_disk_space() {
local usage=$(df "$BACKUP_DIR" | tail -1 | awk '{print $(NF-1)}' | sed 's/%//')
if [ $usage -gt 85 ]; then
echo "WARNING: Backup disk usage at ${usage}%" >> "$LOG_FILE"
echo "Backup disk space critical: ${usage}% used" | \
mail -s "Disk Space Warning" "$EMAIL"
fi
}
echo "$(date): Starting backup monitoring" >> "$LOG_FILE"
check_recent_backups
check_backup_integrity
check_disk_space
echo "$(date): Monitoring complete" >> "$LOG_FILE"
Schedule this monitor to run every four hours: 0 */4 * * * root /usr/local/bin/backup-monitor.sh. The script verifies that backups exist, checks their integrity by reading the archive structure, and monitors available disk space. Early warning about disk space issues prevents backup failures that might go unnoticed until it's too late.
Troubleshooting Common Issues
Even well-designed backup systems encounter problems. Network interruptions break remote backups, permission changes prevent file access, and disk space issues cause silent failures. Understanding common failure modes and their solutions helps you recover quickly when issues arise.
Permission problems plague backup systems, especially after system updates or configuration changes. When backups suddenly fail with permission denied errors, systematically check and correct permissions:
#!/bin/bash
# Fix common permission issues
chown -R backup:backup /srv/backups
chmod -R 750 /srv/backups
# Fix SSH key permissions
chmod 600 /root/.ssh/backup_rsa
chmod 644 /root/.ssh/backup_rsa.pub
chown root:root /root/.ssh/backup_rsa*
# Fix GPG permissions
chmod 700 /root/.gnupg
chmod 600 /root/.gnupg/*
chown -R root:root /root/.gnupg
# Verify and fix SELinux contexts if enabled
if command -v getenforce > /dev/null && [ "$(getenforce)" != "Disabled" ]; then
restorecon -Rv /srv/backups
restorecon -Rv /root/.ssh
fi
Network connectivity issues require systematic testing to identify the failure point. Test each component of your backup path:
#!/bin/bash
REMOTE_HOST="backup-server.example.com"
SSH_KEY="/root/.ssh/backup_rsa"
echo "Testing backup connectivity..."
# Basic network connectivity
if ! ping -c 4 "$REMOTE_HOST" > /dev/null 2>&1; then
echo "Network unreachable - check routing and firewall rules"
ip route get "$REMOTE_HOST"
exit 1
fi
# SSH connectivity
if ! ssh -i "$SSH_KEY" -o ConnectTimeout=10 "backup@$REMOTE_HOST" "echo 'SSH OK'" > /dev/null 2>&1; then
echo "SSH connection failed - verify key authentication"
ssh -vv -i "$SSH_KEY" "backup@$REMOTE_HOST" exit 2>&1 | grep -i "debug\|error"
exit 1
fi
# Storage accessibility
AVAILABLE=$(ssh -i "$SSH_KEY" "backup@$REMOTE_HOST" "df -h /srv/backups | tail -1")
echo "Remote storage status: $AVAILABLE"
This diagnostic script helps pinpoint whether issues stem from network configuration, SSH authentication, or storage problems. The verbose SSH output reveals authentication failures, while the df command confirms the remote storage is mounted and accessible.
Recovery testing validates your entire backup strategy. Create a test recovery script that performs non-destructive recovery verification:
#!/bin/bash
TEST_DIR="/tmp/recovery-test-$(date +%s)"
mkdir -p "$TEST_DIR"
echo "Testing backup recovery procedures..."
# Test file recovery
LATEST_BACKUP=$(ls -t /srv/backups/backup-*/etc/passwd | head -1)
if [ -f "$LATEST_BACKUP" ]; then
cp "$LATEST_BACKUP" "$TEST_DIR/"
if diff /etc/passwd "$TEST_DIR/passwd" > /dev/null; then
echo "â File recovery successful"
else
echo "â Recovery verification failed - files don't match"
fi
fi
# Test database recovery
if [ -f /var/backups/databases/myapp_$(date +%Y%m%d).sql.gz ]; then
gunzip -c /var/backups/databases/myapp_$(date +%Y%m%d).sql.gz > "$TEST_DIR/test.sql"
if mysql -e "CREATE DATABASE test_recovery"; then
mysql test_recovery < "$TEST_DIR/test.sql"
TABLE_COUNT=$(mysql -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='test_recovery'" | tail -1)
echo "â Database recovery successful - $TABLE_COUNT tables restored"
mysql -e "DROP DATABASE test_recovery"
fi
fi
rm -rf "$TEST_DIR"
echo "Recovery testing complete"
Building Your Recovery Strategy
After implementing these backup systems, document your recovery procedures thoroughly. When disaster strikes, you won't have time to figure out recovery steps; you need clear, tested procedures that anyone on your team can follow. Create a recovery runbook that includes emergency contacts, system information, step-by-step recovery procedures for different scenarios, and validation steps to confirm successful recovery.
The combination of rsync for efficient file backups, Restic for encrypted cloud storage, systemd timers for reliable scheduling, and comprehensive monitoring creates a backup system that can withstand ransomware attacks, hardware failures, and human errors. Regular testing ensures your backups work when needed, while encryption and immutability protect against sophisticated attacks.
Remember that backup strategies evolve with your infrastructure. What works for a single server won't scale to hundreds, and what's appropriate for development systems might be insufficient for production databases. Start with the fundamentals presented here, then adapt and expand based on your specific requirements. The key is to begin now: every day without proper backups is a day you're gambling with your data's survival. With Ubuntu's robust toolset and the techniques covered in this guide, you have everything needed to build a backup system that provides real protection against the threats facing modern servers.