Container Security with Docker on Ubuntu: Building Fortress-Grade Protection
After years of working with Docker security in production environments, I've learned that container security isn't just about running security scanners or following generic best practices. It's about understanding how each layer of protection fits together to create a comprehensive defense strategy that actually works in the real world. This guide walks through implementing production-ready Docker security on Ubuntu, covering everything from daemon hardening to runtime threat detection, with battle-tested configurations that you can deploy today.
Setting the Foundation
Container security starts before you even pull your first image. Over the years, I've seen countless breaches that could have been prevented with proper daemon configuration, so let's begin by securing the Docker daemon itself. The Docker daemon runs with root privileges by default, making it a prime target for attackers. What many teams don't realize is that Ubuntu's default Docker installation leaves several security features disabled for convenience.
Your first step should be creating a comprehensive daemon configuration that enforces security from the ground up. Start by creating or modifying your Docker daemon configuration file at /etc/docker/daemon.json. This configuration enables critical security features that should be standard in any production environment:
{
"log-level": "info",
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"live-restore": true,
"userns-remap": "default",
"seccomp-profile": "builtin",
"selinux-enabled": false,
"icc": false,
"userland-proxy": false,
"no-new-privileges": true,
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"default-ulimits": {
"nofile": {
"Hard": 64000,
"Name": "nofile",
"Soft": 64000
}
},
"exec-opts": ["native.cgroupdriver=systemd"],
"containerd": "/run/containerd/containerd.sock"
}
The most critical setting here is userns-remap, which creates a dedicated user namespace for containers. This means that even if an attacker breaks out of a container, they'll only have unprivileged access on the host system. Setting this up requires creating subordinate UID and GID mappings:
sudo adduser --system --no-create-home --group dockremap
echo 'dockremap:231072:65536' | sudo tee -a /etc/subuid
echo 'dockremap:231072:65536' | sudo tee -a /etc/subgid
sudo systemctl restart docker
After restarting Docker, verify the configuration with docker info | grep "Security Options". You should see user namespaces enabled along with other security features. This single configuration change eliminates entire classes of container escape vulnerabilities.
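A quick sanity check: query the daemon for its active protections and confirm the remapped data directory exists (exact output varies by Docker version):
docker info --format '{{.SecurityOptions}}'
# Expect entries such as name=userns and name=seccomp,profile=builtin
ls /var/lib/docker/
# A directory named 231072.231072 confirms remapping is active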
Implementing Runtime Security Monitoring
Now that we've hardened the daemon, let's implement runtime security monitoring using Falco, which has become the de facto standard for container runtime security. What I particularly appreciate about Falco is its ability to detect threats at the kernel level using eBPF, providing visibility that traditional monitoring tools miss. The modern deployment method uses a container-based approach that requires minimal host modification:
docker run -d --name falco \
--privileged \
-v /var/run/docker.sock:/host/var/run/docker.sock \
-v /proc:/host/proc:ro \
-v /etc:/host/etc:ro \
--restart always \
falcosecurity/falco:latest
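Before writing custom rules, confirm Falco is actually seeing events by tripping one of its stock rules, such as the sensitive-file-read detection:
docker run --rm alpine:latest cat /etc/shadow
docker logs falco 2>&1 | tail -5
# Expect a warning about a sensitive file opened for reading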
Falco's real power comes from custom rules tailored to your environment. After deploying hundreds of containers in production, I've developed rules that catch the most common attack patterns. Create a custom rules file at /etc/falco/falco_rules.local.yaml with rules specific to container threats:
- rule: Container Escape via Mount Namespace
desc: Detect attempts to escape container via mount operations
condition: >
spawned_process and container and
proc.name in (mount, umount) and
(proc.args contains "/dev" or
proc.args contains "/proc" or
proc.args contains "/sys" or
proc.args contains "/host")
output: >
Container escape via mount detected (user=%user.name command=%proc.cmdline
container=%container.name image=%container.image.repository pid=%proc.pid)
priority: CRITICAL
tags: [containers, escape, security]
- rule: Container with Sensitive Mount Started
  desc: Alert on containers started with dangerous host mounts
  condition: >
    container_started and
    (container.mount.dest[/var/run/docker.sock] != "N/A" or
     container.mount.dest[/etc*] != "N/A" or
     container.mount.dest[/proc*] != "N/A" or
     container.mount.dest[/sys*] != "N/A")
  output: >
    Container with sensitive mount started (user=%user.name command=%proc.cmdline
    container_id=%container.id container_name=%container.name image=%container.image.repository)
  priority: ERROR
  tags: [containers, mount, security]
- rule: Container Privilege Escalation
desc: Detect privilege escalation attempts in containers
condition: >
spawned_process and container and
((proc.name=sudo and not proc.pname=systemd) or
proc.name=su or
(proc.name in (chmod, chown) and proc.args contains "+s"))
output: >
Privilege escalation in container (user=%user.name command=%proc.cmdline
container=%container.name image=%container.image.repository)
priority: WARNING
tags: [containers, privilege, escalation]
To make monitoring actionable, deploy Falcosidekick alongside Falco for a web-based dashboard that aggregates security events:
docker run -d --name falcosidekick \
-p 2801:2801 \
-e WEBUI_URL=http://localhost:2802 \
--restart always \
falcosecurity/falcosidekick:latest
docker run -d --name falcosidekick-ui \
-p 2802:2802 \
-e FALCOSIDEKICK_URL=http://localhost:2801 \
--restart always \
falcosecurity/falcosidekick-ui:latest
Then update your Falco configuration to send events to Falcosidekick. Falcosidekick expects JSON-formatted events, so enable JSON output alongside the HTTP output. Edit /etc/falco/falco.yaml and add:
json_output: true
http_output:
  enabled: true
  url: http://localhost:2801
Securing Container Images with Vulnerability Scanning
Image vulnerability scanning has evolved significantly over the past few years. Today's tools go beyond simple CVE matching to provide comprehensive software bill of materials (SBOM) analysis and policy enforcement. I've found that using multiple scanners provides the best coverage, as each tool has different strengths and vulnerability databases.
Start with Trivy, which offers the best balance of speed, accuracy, and ease of use. Install it on Ubuntu using the official APT repository:
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | \
  gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | \
  sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update && sudo apt-get install trivy
Create a comprehensive Trivy configuration file at /etc/trivy/trivy.yaml that enforces your security policies:
severity:
  - CRITICAL
  - HIGH
  - MEDIUM
ignore-unfixed: true
exit-code: 1
format: json
output: security-report.json
skip-db-update: false
cache-dir: /tmp/trivy-cache
scan:
  scanners:
    - vuln
    - misconfig
    - secret
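CVEs you have verified as mitigated belong in a .trivyignore file in the directory you scan from, rather than in the config itself. For example, to suppress the Log4j finding once it is patched:
# .trivyignore -- one CVE ID per line
# Log4j - already mitigated
CVE-2021-44228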
For continuous integration, integrate Trivy into your build process. Here's a Docker build script that includes security scanning:
#!/bin/bash
# secure-docker-build.sh
IMAGE_NAME="myapp"
IMAGE_TAG="latest"
echo "Building Docker image..."
docker build -t ${IMAGE_NAME}:${IMAGE_TAG} .
echo "Scanning for vulnerabilities..."
trivy image --severity CRITICAL,HIGH --exit-code 1 ${IMAGE_NAME}:${IMAGE_TAG}
if [ $? -eq 0 ]; then
echo "No critical vulnerabilities found. Image is safe to use."
# Additional scanning with Docker Scout
docker scout quickview ${IMAGE_NAME}:${IMAGE_TAG}
docker scout cves --only-severity critical,high ${IMAGE_NAME}:${IMAGE_TAG}
# Generate SBOM for supply chain security
docker scout sbom --format spdx-json --output sbom.json ${IMAGE_NAME}:${IMAGE_TAG}
else
echo "Critical vulnerabilities detected! Build failed."
exit 1
fi
Docker Scout provides native integration with Docker Desktop and offers unique insights into base image recommendations. Enable it for your images:
# Enable Docker Scout for an organization
docker scout enroll myorganization
# Analyze image and get recommendations
docker scout quickview nginx:latest
docker scout recommendations nginx:latest
# Compare images for security improvements
docker scout compare nginx:1.20 nginx:latest
Building Secure Images with Multi-Stage Builds
The way you build images dramatically impacts their security posture. Multi-stage builds have become my go-to approach for creating minimal, secure container images. The technique uses one stage with the full toolchain to build your application and a second, minimal stage for runtime, stripping compilers, package managers, and build tools out of the final image and dramatically shrinking its attack surface.
Here's a production-ready multi-stage Dockerfile for a Go application that implements multiple security best practices:
# syntax=docker/dockerfile:1
# Build stage with full toolchain
FROM golang:1.21-alpine AS builder
# Install CA certificates and git for private repos
RUN apk add --no-cache ca-certificates git
WORKDIR /src
# Copy go mod files first for better caching
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build static binary with security flags
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags='-w -s -extldflags "-static"' \
    -o app .
# Test stage (optional but recommended)
FROM builder AS test
RUN go test -v ./...
# Production stage - distroless for minimal attack surface
FROM gcr.io/distroless/static-debian12:nonroot
WORKDIR /
COPY --from=builder /src/app /app
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
USER nonroot:nonroot
ENTRYPOINT ["/app"]
The distroless base image contains only your application and runtime dependencies, with no shell, package manager, or other utilities that attackers could exploit. This approach has prevented countless security incidents in my deployments.
For Node.js applications, the pattern is similar but includes additional security considerations:
# syntax=docker/dockerfile:1
# Dependencies stage
FROM node:18-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
# Install only production dependencies
RUN npm ci --omit=dev && npm cache clean --force
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm run test
# Security scanning stage
FROM build AS security-scan
RUN npm audit --audit-level moderate
# Production stage
FROM gcr.io/distroless/nodejs18-debian12:nonroot
WORKDIR /app
COPY --from=dependencies --chown=nonroot:nonroot /app/node_modules ./node_modules
COPY --from=build --chown=nonroot:nonroot /app/dist ./dist
COPY --from=build --chown=nonroot:nonroot /app/package*.json ./
USER nonroot:nonroot
EXPOSE 3000
CMD ["dist/index.js"]
Implementing Network Security and Isolation
Network security is where many container deployments fail. Docker's default bridge network allows all containers to communicate freely, which is convenient for development but dangerous in production. After investigating several container breaches, I've found that proper network segmentation could have prevented most of them.
Start by disabling inter-container communication on the default bridge and creating isolated networks for different application tiers:
# Create frontend network with restricted access
docker network create \
--driver bridge \
--subnet 10.1.0.0/24 \
--ip-range 10.1.0.0/25 \
--gateway 10.1.0.1 \
--opt com.docker.network.bridge.enable_icc=false \
frontend-net
# Create backend network (internal only)
docker network create \
--driver bridge \
--internal \
--subnet 10.2.0.0/24 \
backend-net
# Create database network (highly isolated)
docker network create \
--driver bridge \
--internal \
--subnet 10.3.0.0/24 \
--opt com.docker.network.bridge.enable_icc=false \
database-net
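With the tiers created, attach each workload to its network; cross-tier traffic is then governed by the firewall rules below rather than Docker's permissive defaults (container names and images are placeholders):
docker run -d --name web --network frontend-net -p 443:443 nginx:alpine
docker run -d --name api --network backend-net myapp:latest
docker run -d --name db --network database-net postgres:16-alpine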
Ubuntu's UFW firewall doesn't play nicely with Docker by default, as Docker manipulates iptables directly. This is a critical security gap that many teams overlook. To fix this, you need to integrate UFW with Docker properly. First, install the UFW-Docker integration:
sudo wget -O /usr/local/bin/ufw-docker \
https://github.com/chaifeng/ufw-docker/raw/master/ufw-docker
sudo chmod +x /usr/local/bin/ufw-docker
sudo ufw-docker install
sudo ufw enable
Then configure UFW rules for your containers:
# Allow external access to web container on port 443
sudo ufw route allow proto tcp from any to any port 443
# Allow app tier to access database
sudo ufw route allow proto tcp from 10.2.0.0/24 to 10.3.0.0/24 port 5432
# Block all other inter-network communication
sudo ufw route deny from 10.1.0.0/24 to 10.3.0.0/24
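ufw-docker can also manage per-container rules by name, which survive container IP changes (the web container name matches the example above):
sudo ufw-docker allow web 443
sudo ufw-docker list web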
Managing Secrets the Right Way
Secrets management remains one of the most challenging aspects of container security. I've seen too many production deployments with database passwords in environment variables or, worse, hardcoded in images. Modern secret management requires a combination of tools and practices that protect secrets at rest, in transit, and in use.
For production deployments, HashiCorp Vault has become my standard choice. It provides dynamic secrets, automatic rotation, and detailed audit logs. Deploy Vault using Docker Compose with proper security configurations:
version: '3.8'
services:
vault:
image: hashicorp/vault:1.18.3
container_name: vault
cap_add:
- IPC_LOCK
environment:
VAULT_LOCAL_CONFIG: |
{
"backend": {
"file": {
"path": "/vault/file"
}
},
"listener": {
"tcp": {
"address": "0.0.0.0:8200",
"tls_cert_file": "/vault/certs/vault.crt",
"tls_key_file": "/vault/certs/vault.key"
}
},
"ui": true,
"disable_mlock": false,
"disable_cache": false
}
ports:
- "8200:8200"
volumes:
- vault-data:/vault/file
- ./certs:/vault/certs:ro
command: server
restart: always
volumes:
vault-data:
Initialize and configure Vault for container use:
# Initialize Vault
export VAULT_ADDR='https://localhost:8200'
vault operator init -key-shares=5 -key-threshold=3
# Unseal Vault (use at least 3 of the 5 keys)
vault operator unseal <key-1>
vault operator unseal <key-2>
vault operator unseal <key-3>
# Login with root token
vault login <root-token>
# Enable KV secrets engine
vault secrets enable -path=secret kv-v2
# Create a policy for containers
vault policy write container-policy - <<EOF
path "secret/data/containers/*" {
capabilities = ["read", "list"]
}
EOF
# Enable AppRole auth for containers
vault auth enable approle
vault write auth/approle/role/container \
token_policies="container-policy" \
token_ttl=1h \
token_max_ttl=4h
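The agent authenticates using files containing the AppRole credentials, which you can export once and hand to the sidecar (the Compose file below mounts them from the working directory):
vault read -field=role_id auth/approle/role/container/role-id > role-id
vault write -f -field=secret_id auth/approle/role/container/secret-id > secret-id
chmod 0400 role-id secret-id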
For containers to access Vault secrets, use the Vault Agent sidecar pattern. Create a Vault Agent configuration:
pid_file = "/tmp/pidfile"
auto_auth {
method "approle" {
mount_path = "auth/approle"
config = {
role_id_file_path = "/vault/role-id"
secret_id_file_path = "/vault/secret-id"
}
}
sink "file" {
config = {
path = "/vault/token"
}
}
}
template {
source = "/vault/templates/db-config.tpl"
destination = "/secrets/db-config.json"
perms = "0400"
user = "1000"
group = "1000"
}
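The template the agent renders pulls from the KV v2 engine; a minimal sketch, assuming a secret at secret/containers/myapp with username and password keys (mount the templates directory at /vault/templates in the agent container):
cat > ./templates/db-config.tpl <<'EOF'
{{ with secret "secret/data/containers/myapp" -}}
{"db_user": "{{ .Data.data.username }}", "db_password": "{{ .Data.data.password }}"}
{{- end }}
EOF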
Deploy your application with the Vault Agent sidecar:
version: '3.8'
services:
vault-agent:
image: hashicorp/vault:latest
container_name: vault-agent
environment:
VAULT_ADDR: https://vault:8200
volumes:
- ./vault-agent.hcl:/vault/config.hcl:ro
- ./role-id:/vault/role-id:ro
- ./secret-id:/vault/secret-id:ro
- shared-secrets:/secrets
command: ["vault", "agent", "-config=/vault/config.hcl"]
restart: always
app:
image: myapp:latest
depends_on:
- vault-agent
volumes:
- shared-secrets:/secrets:ro
restart: always
volumes:
shared-secrets:
driver: local
driver_opts:
type: tmpfs
device: tmpfs
o: size=10m,mode=0700
Implementing AppArmor Profiles for Container Hardening
AppArmor provides mandatory access control that can prevent containers from accessing resources they shouldn't touch. Ubuntu comes with AppArmor enabled by default, but Docker's default profile is quite permissive. Creating custom AppArmor profiles for your containers adds another layer of defense that has saved me from several potential breaches.
Create a custom AppArmor profile for an nginx container at /etc/apparmor.d/containers/docker-nginx:
#include <tunables/global>
profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/base>
network inet tcp,
network inet udp,
network inet icmp,
deny network raw,
deny network packet,
file,
umount,
# Deny access to sensitive directories
deny /bin/** wl,
deny /boot/** wl,
deny /dev/** wl,
deny /etc/** wl,
deny /home/** wl,
deny /lib/** wl,
deny /lib64/** wl,
deny /media/** wl,
deny /mnt/** wl,
deny /opt/** wl,
deny /root/** wl,
deny /sbin/** wl,
deny /srv/** wl,
deny /tmp/** wl,
deny /var/** wl,
# Allow nginx-specific paths
/usr/sbin/nginx ix,
/usr/share/nginx/** r,
/var/log/nginx/** w,
/var/cache/nginx/** w,
/var/run/nginx.pid w,
/etc/nginx/** r,
# Deny shell access
deny /bin/dash mrwklx,
deny /bin/sh mrwklx,
deny /usr/bin/top mrwklx,
# Capabilities
capability chown,
capability dac_override,
capability setuid,
capability setgid,
capability net_bind_service,
# Deny dangerous capabilities
deny capability dac_read_search,
deny capability sys_admin,
deny capability sys_module,
deny capability sys_ptrace,
# Proc filesystem restrictions
deny @{PROC}/* w,
deny @{PROC}/sys/[^k]** w,
deny @{PROC}/sysrq-trigger rwklx,
deny @{PROC}/mem rwklx,
deny @{PROC}/kmem rwklx,
deny @{PROC}/kcore rwklx,
deny mount,
}
Load and apply the profile:
sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx
sudo aa-status | grep docker-nginx
Run containers with the custom profile:
docker run -d \
--name secure-nginx \
--security-opt "apparmor=docker-nginx" \
-p 80:80 \
nginx:alpine
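To verify enforcement, try to get a shell in the container; the profile's /bin/sh denial should refuse it while nginx keeps serving:
docker exec secure-nginx /bin/sh -c 'id'   # expect: permission denied
curl -sI http://localhost:80/ | head -1    # expect: HTTP/1.1 200 OK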
To debug AppArmor denials, monitor the kernel logs:
sudo dmesg | grep -i apparmor
sudo aa-logprof
Advanced Runtime Security with Seccomp Profiles
Seccomp (secure computing mode) filters system calls at the kernel level, providing fine-grained control over what containers can do. Docker's default seccomp profile blocks about 44 dangerous system calls, but for high-security environments, you might need custom profiles.
Create a restrictive seccomp profile for a web application at /etc/docker/seccomp/webapp.json:
{
"defaultAction": "SCMP_ACT_ERRNO",
"defaultErrnoRet": 1,
"architectures": [
"SCMP_ARCH_X86_64",
"SCMP_ARCH_X86",
"SCMP_ARCH_X32"
],
"syscalls": [
{
"names": [
"accept", "accept4", "access", "arch_prctl", "bind", "brk",
"clock_gettime", "clone", "close", "connect", "dup", "dup2",
"epoll_create", "epoll_create1", "epoll_ctl", "epoll_pwait",
"epoll_wait", "execve", "exit", "exit_group", "fchmod", "fchown",
"fcntl", "fstat", "fstatfs", "futex", "getcwd", "getdents",
"getdents64", "getegid", "geteuid", "getgid", "getpgrp",
"getpid", "getppid", "getpriority", "getrandom", "getresgid",
"getresuid", "getrlimit", "getsockname", "getsockopt", "gettid",
"gettimeofday", "getuid", "ioctl", "kill", "listen", "lseek",
"madvise", "memfd_create", "mincore", "mmap", "mprotect",
"mremap", "munmap", "nanosleep", "newfstatat", "open", "openat",
"pipe", "pipe2", "poll", "ppoll", "prctl", "pread64", "preadv",
"preadv2", "prlimit64", "pselect6", "pwrite64", "pwritev",
"pwritev2", "read", "readlink", "readlinkat", "readv", "recvfrom",
"recvmmsg", "recvmsg", "rename", "renameat", "rt_sigaction",
"rt_sigpending", "rt_sigprocmask", "rt_sigqueueinfo", "rt_sigreturn",
"rt_sigsuspend", "rt_sigtimedwait", "sched_getaffinity",
"sched_setaffinity", "sched_yield", "select", "sendfile",
"sendmmsg", "sendmsg", "sendto", "set_robust_list", "set_tid_address",
"setgid", "setgroups", "setitimer", "setpgid", "setpriority",
"setresgid", "setresuid", "setsid", "setsockopt", "setuid",
"shutdown", "sigaltstack", "socket", "socketpair", "stat",
"statfs", "statx", "symlink", "symlinkat", "sysinfo", "tgkill",
"time", "times", "umask", "uname", "unlink", "unlinkat",
"wait4", "waitid", "write", "writev"
],
"action": "SCMP_ACT_ALLOW"
}
]
}
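To derive an allowlist like this for your own application, observe which syscalls it actually makes; strace's summary mode is a rough starting point on a development host (./app is a placeholder for your binary):
# -f follows child processes, -c prints a per-syscall summary on exit
strace -f -c ./app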
Apply the custom seccomp profile along with other security options:
docker run -d \
--name secure-app \
--cap-drop ALL \
--cap-add NET_BIND_SERVICE \
--security-opt no-new-privileges:true \
--security-opt seccomp=/etc/docker/seccomp/webapp.json \
--security-opt apparmor=docker-default \
--read-only \
--tmpfs /tmp:noexec,nosuid,size=10m \
--tmpfs /var/run:noexec,nosuid,size=10m \
myapp:latest
Container Resource Limits and Isolation
Resource limits aren't just about preventing one container from monopolizing system resources; they're a critical security control that can prevent denial-of-service attacks and limit the impact of compromised containers. I've learned to set aggressive limits and adjust them based on actual usage patterns.
docker run -d \
--name limited-app \
--memory="512m" \
--memory-swap="512m" \
--memory-reservation="256m" \
--cpus="0.5" \
--cpu-shares="512" \
--pids-limit="100" \
--ulimit nofile=1024:2048 \
--ulimit nproc=50:100 \
myapp:latest
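Confirm the limits registered with the daemon before relying on them:
docker inspect limited-app --format 'mem={{.HostConfig.Memory}} cpus={{.HostConfig.NanoCpus}} pids={{.HostConfig.PidsLimit}}'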
For production deployments, use Docker Compose to define resource limits consistently:
version: '3.8'
services:
app:
image: myapp:latest
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
pids: 100
reservations:
cpus: '0.25'
memory: 256M
ulimits:
nofile:
soft: 1024
hard: 2048
nproc:
soft: 50
hard: 100
security_opt:
- no-new-privileges:true
- apparmor=docker-default
- seccomp=builtin
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
read_only: true
tmpfs:
- /tmp:noexec,nosuid,size=10m
Continuous Security Monitoring and Compliance
Security isn't a one-time configuration; it requires continuous monitoring and adjustment. I've implemented automated security checks that run continuously and alert on policy violations. Create a comprehensive monitoring script at /usr/local/bin/docker-security-monitor.sh:
#!/bin/bash
# Docker Security Monitoring Script
LOG_FILE="/var/log/docker-security.log"
ALERT_EMAIL="security@example.com"
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}
check_privileged_containers() {
PRIVILEGED=$(docker ps --format '{{.Names}}' --filter 'status=running' | \
xargs -I {} docker inspect {} --format '{{.Name}} {{.HostConfig.Privileged}}' | \
grep true)
if [ ! -z "$PRIVILEGED" ]; then
log_message "ALERT: Privileged containers detected: $PRIVILEGED"
return 1
fi
return 0
}
check_docker_socket_mount() {
SOCKET_MOUNTS=$(docker ps --format '{{.Names}}' | \
xargs -I {} docker inspect {} --format '{{.Name}} {{range .Mounts}}{{.Source}}{{end}}' | \
grep /var/run/docker.sock)
if [ ! -z "$SOCKET_MOUNTS" ]; then
log_message "CRITICAL: Docker socket mounted in containers: $SOCKET_MOUNTS"
return 1
fi
return 0
}
check_root_containers() {
    ROOT_CONTAINERS=$(docker ps --format '{{.Names}}' | \
        xargs -I {} docker inspect {} --format '{{.Name}} {{.Config.User}}' | \
        awk 'NF == 1 || $2 == "root" || $2 == "0" {print $1}')
    if [ ! -z "$ROOT_CONTAINERS" ]; then
        log_message "WARNING: Containers running as root: $ROOT_CONTAINERS"
        return 1
    fi
    return 0
}
check_resource_limits() {
NO_LIMITS=$(docker ps --format '{{.Names}}' | \
xargs -I {} docker inspect {} --format '{{.Name}} {{.HostConfig.Memory}}' | \
grep ' 0$' | cut -d' ' -f1)
if [ ! -z "$NO_LIMITS" ]; then
log_message "WARNING: Containers without memory limits: $NO_LIMITS"
return 1
fi
return 0
}
scan_images() {
for image in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>'); do
log_message "Scanning image: $image"
trivy image --severity CRITICAL,HIGH --quiet "$image" >> "$LOG_FILE" 2>&1
if [ $? -ne 0 ]; then
log_message "WARNING: Vulnerabilities found in $image"
fi
done
}
# Main monitoring loop
main() {
log_message "Starting Docker security monitoring"
check_privileged_containers
check_docker_socket_mount
check_root_containers
check_resource_limits
scan_images
    # Check Falco status (deployed as a container earlier in this guide)
    if ! docker ps --format '{{.Names}}' | grep -q '^falco$'; then
        log_message "CRITICAL: Falco is not running!"
        docker start falco >> "$LOG_FILE" 2>&1
    fi
# Verify AppArmor profiles
if ! aa-status | grep -q docker-default; then
log_message "WARNING: AppArmor docker-default profile not loaded"
fi
log_message "Security monitoring completed"
}
main
Schedule this script to run every hour using cron:
sudo chmod +x /usr/local/bin/docker-security-monitor.sh
echo "0 * * * * root /usr/local/bin/docker-security-monitor.sh" | sudo tee /etc/cron.d/docker-security
Integrating Security into CI/CD Pipelines
Security must be built into your deployment pipeline from the start. Here's a complete GitHub Actions workflow that implements comprehensive security checks:
name: Secure Container Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-scan:
runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      security-events: write
      id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix={{branch}}-
            type=raw,value=${{ github.sha }}
- name: Build image
uses: docker/build-push-action@v5
with:
context: .
load: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
format: 'sarif'
output: 'trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy scan results
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: 'trivy-results.sarif'
- name: Run Grype vulnerability scanner
uses: anchore/scan-action@v3
id: grype
with:
image: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
fail-build: false
severity-cutoff: high
- name: Docker Scout Analysis
uses: docker/scout-action@v1
with:
command: cves
image: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
only-severities: critical,high
exit-code: true
- name: Generate SBOM
uses: docker/scout-action@v1
with:
command: sbom
image: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
format: spdx-json
output: sbom.json
      - name: Push image if secure
        if: success()
        id: push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Install Cosign
        uses: sigstore/cosign-installer@v3
      - name: Sign container image
        run: |
          cosign sign --yes ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.push.outputs.digest }}
Troubleshooting Common Security Issues
Over the years, I've encountered numerous security-related issues that can be frustrating to debug. Let me share solutions to the most common problems you'll face when implementing these security measures.
When containers can't start after enabling user namespace remapping, it's usually because existing images and volumes are still owned by the original host UIDs; Docker also switches to a remapped data directory (for example /var/lib/docker/231072.231072), so pre-existing data appears to vanish. Stop the daemon and re-own any data you carry over:
sudo systemctl stop docker
sudo chown -R dockremap:dockremap /var/lib/docker
sudo systemctl start docker
If containers lose network connectivity after implementing UFW rules, the issue is typically with Docker's iptables manipulation conflicting with UFW. Verify the DOCKER-USER chain exists and contains your rules:
sudo iptables -L DOCKER-USER -n -v
sudo ufw-docker status
AppArmor denials can be silent killers that prevent containers from functioning properly. When containers behave unexpectedly, always check the kernel logs for AppArmor denials:
sudo dmesg | grep -i apparmor | tail -20
sudo aa-complain /etc/apparmor.d/containers/docker-nginx # Temporary for debugging
sudo aa-enforce /etc/apparmor.d/containers/docker-nginx # Re-enable after fixing
For Vault integration issues, the most common problem is token expiration or incorrect policy permissions. Always verify the token validity and policy attachment:
vault token lookup
vault token capabilities secret/data/containers/myapp
When Falco generates too many false positives, tune the rules by adjusting conditions and priorities. Start with higher priority thresholds and gradually lower them as you understand your environment's normal behavior:
- rule: Custom Rule with Exceptions
  desc: Alert on suspicious behavior except for known good processes
  condition: >
    spawned_process and container and
    proc.name in (suspicious_commands) and
    not proc.pname in (trusted_parents) and
    not container.image.repository in (trusted_images)
  output: >
    Suspicious process in container (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository)
  priority: WARNING
Performance Considerations and Optimization
Security measures do impact performance, but with proper tuning, the overhead is minimal and well worth the protection. User namespace remapping adds about 2-3% overhead, while seccomp filtering typically adds less than 1%. The biggest performance impact comes from vulnerability scanning, which is why I recommend scanning during build time rather than runtime.
To optimize scanning performance, maintain a local vulnerability database cache:
# Pre-download Trivy database
trivy image --download-db-only
# Cache Grype database
grype db update
# Schedule regular updates
echo "0 2 * * * root trivy image --download-db-only && grype db update" | sudo tee /etc/cron.d/vuln-db-update
For Falco, use an eBPF engine instead of the kernel module for better performance. Recent Falco releases select the modern eBPF engine automatically; on older kernels, fall back to the classic eBPF probe by setting an empty FALCO_BPF_PROBE variable:
docker run -d --name falco \
--privileged \
-v /var/run/docker.sock:/host/var/run/docker.sock \
-v /proc:/host/proc:ro \
-e FALCO_BPF_PROBE="" \
falcosecurity/falco:latest
Conclusion and Best Practices Summary
Container security on Ubuntu requires a comprehensive, layered approach that addresses threats at every level of the stack. The configurations and practices I've shared here come from years of securing production container deployments and learning from both successes and failures.
The most critical security measures you should implement immediately include enabling user namespace remapping in the Docker daemon, deploying Falco for runtime threat detection, implementing comprehensive vulnerability scanning in your CI/CD pipeline, using multi-stage builds with distroless base images, and properly managing secrets with Vault or similar tools. These five measures alone will prevent the vast majority of container security incidents.
Remember that security is an ongoing process, not a destination. Regularly update your base images and scanning tools, as new vulnerabilities are discovered daily. Monitor your containers continuously and investigate any anomalies promptly. Most importantly, make security everyone's responsibility by integrating these practices into your development workflow rather than treating them as an afterthought.
The container security landscape evolves rapidly, with new tools and threats emerging constantly. Stay informed about the latest CVEs affecting container runtimes, particularly those with high CVSS scores like the recent Docker Desktop escapes and NVIDIA container toolkit vulnerabilities. Join security communities, follow security researchers, and regularly audit your configurations against benchmarks like CIS Docker Benchmark.
Finally, remember that perfect security doesn't exist, but with the comprehensive approach outlined in this guide, you can achieve a security posture that makes your containers an extremely hard target while maintaining the agility and efficiency that makes containers valuable in the first place.