Forget Hetzner—while I’ve been a satisfied Hetzner user since 2010 due to their competitive offerings, sometimes we’re looking for cost savings and code adventures, aren’t we?
But remember, adventures can be risky, even those related to software engineering.
By following this guide and its precautions, you can host your Ruby on Rails app on your Synology NAS for free, complete with a custom domain and SSL encryption.
Disclaimer: By following any of the steps below, you do so at your own risk. Exposing your NAS to the internet is not without risk, even when using a Cloudflare tunnel. Potential risks are discussed at the end of this article.
Please keep in mind this is a PoC (Proof of Concept), so production deployment best practices have not been fully applied. Treat it as a starting point for your own research and setup, one you should harden further by securing both your containerized Ruby on Rails application and your network.
Synology NAS with Cloudflare Tunnel
Since Starlink is my internet provider and uses CGNAT (Carrier-grade NAT), I can’t use my router-exposed ports to access my apps from the internet. Cloudflare tunnels to the rescue! Not only do they work, but we’ll also get a custom (sub)domain with SSL!
Adventures with technology come with challenges, though. One issue is that the randomly generated trycloudflare.com subdomain changes whenever the cloudflared Docker service restarts (I’ve got a solution for that, too!). Another concern is security: Cloudflare terminates SSL encryption at its edge, traffic from the cloudflared connector to your app travels as plain HTTP on your LAN, and exposing anything through a tunnel carries real risk.
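To give a taste of how simple the tunnel side is, a Cloudflare quick tunnel only needs the cloudflared container pointed at a local URL; a minimal sketch (the IP and port are placeholders for your NAS and app):
# Quick tunnel: cloudflared prints a random https://<something>.trycloudflare.com URL in its logs
docker run --rm cloudflare/cloudflared:latest tunnel --url http://192.168.0.230:3003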
Prerequisites - Preparation steps
We’ll be deploying the app on a Synology NAS with Container Manager installed and running. I’ve upgraded the Container Manager Synology package to the latest beta to run Coolify, but it should work with the stock version of Container Manager.

I set up all Docker deployments as Projects because I find them easier to manage than CLI commands.
Enable SSH access
We’ll also be using SSH access to the NAS to sync our app files. Ensure you’ve got it enabled and can SSH into your NAS:
- Go to the terminal setting page on your Synology device:
- Synology NAS: DSM Control Panel > Terminal & SNMP > Terminal
- Synology Router: SRM Control Panel > Services > System Services > Terminal
- Tick Enable SSH service.
- Specify a port number for SSH connections and save the settings. To ensure system security, replacing the default port 22 with another one is recommended.
For more details, visit this guide.
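Once enabled, a quick sanity check from your development machine (replace the port and user with your own):
# Connect using the custom SSH port configured above
ssh -p 2222 your_admin_user@your_nas_ip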
Web Station
I also have Web Station running and will be using it, but I’m sure this can be done without it. More information can be found here.
Development environment
I’m using VSCode with our custom Ruby & Rails extension, vsc-ruby-rails, which makes it easier to run Ruby and Rails tasks.
Generate the app with a devcontainer, PostgreSQL and Tailwind:
rails new SynoRoR --devcontainer -d postgresql -c tailwind
Then adjust a few files: point the production database at DATABASE_URL, add a root route, and render a simple welcome page.
# config/database.yml
production:
  primary: &primary_production
    <<: *default
    database: syno_ro_r_production
    url: <%= ENV["DATABASE_URL"] %>
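For reference, the <<: *default line merges in the default anchor that rails new generates near the top of config/database.yml; it looks roughly like this:
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>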
# config/routes.rb
Rails.application.routes.draw do
  root "application#index"
end
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Only allow modern browsers supporting webp images, web push, badges, import maps, CSS nesting, and CSS :has.
  allow_browser versions: :modern

  def index
  end
end
<%# app/views/application/index.html.erb %>
<div class="flex flex-col items-center justify-center bg-red-100 shadow-lg p-32">
  <h1 class="text-4xl font-bold text-red-600 mb-4">Welcome to Syno RoR</h1>
  <p class="text-lg text-blue-500 mb-8">The true self-hosted app.</p>
</div>
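Before anything touches the NAS, a quick local smoke test inside the devcontainer doesn’t hurt:
# Prepare the development database and boot the app
bin/rails db:prepare
bin/rails server -b 0.0.0.0
# Then open http://localhost:3000 and confirm the welcome page renders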
Docker Preparations on Synology NAS
Create Docker project directory
I keep all Docker project folders inside /volume1/docker/, so it would be /volume1/docker/syno-ror.
Create the following folders inside syno-ror:
- rails_app
- cloudflared
Set permissions for the folders
We’ll be using the default Dockerfile generated by Ruby on Rails 8 without modifying it. To ensure everything works smoothly, give the ContainerManager system user Full Control of the project folder on top of its default Read & Write permissions. Remember to Apply to this folder, sub-folders and files.
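If you prefer to double-check from the command line, you can inspect the folder over SSH (synoacltool is DSM’s tool for reading ACLs; treat the exact output as DSM-version dependent):
# Inspect ownership and DSM ACLs on the project folder
ls -la /volume1/docker/syno-ror
sudo synoacltool -get /volume1/docker/syno-ror/rails_app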
With permissions in place, sync the application code from your development machine to the NAS:
cd SynoRoR

# Sync the app source to the NAS, excluding folders we don't want to ship
rsync -avz --progress -e ssh --rsync-path=/bin/rsync --exclude={".git","tmp","storage","test","log"} . user@your_nas_ip:/volume1/docker/syno-ror/rails_app

# Recreate the excluded runtime folders on the NAS
ssh -i ~/.ssh/id_rsa user@your_nas_ip 'mkdir -p /volume1/docker/syno-ror/rails_app/{tmp,log,storage}'

# Write the Rails master key into the .env file consumed by docker-compose
ssh -i ~/.ssh/id_rsa user@your_nas_ip "echo 'RAILS_MASTER_KEY=$(cat config/master.key)' > /volume1/docker/syno-ror/.env"
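To avoid retyping these on every deploy, you could wrap them in a small helper script (a hypothetical deploy.sh using the same host and paths as above):
#!/bin/bash
# deploy.sh - hypothetical helper bundling the sync steps above
set -euo pipefail

NAS="user@your_nas_ip"
DEST="/volume1/docker/syno-ror/rails_app"

# Sync the app source, excluding folders we don't want to ship
rsync -avz --progress -e ssh --rsync-path=/bin/rsync \
  --exclude={".git","tmp","storage","test","log"} . "$NAS:$DEST"

# Make sure the excluded runtime folders exist on the NAS
ssh "$NAS" "mkdir -p $DEST/{tmp,log,storage}"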
# Specify the version of the Docker Compose file format
version: '3.8'

services:
  # Define the Rails application service
  rails-app:
    env_file:
      - /volume1/docker/syno-ror/.env
    # Build the Docker image for the Rails app
    build:
      context: rails_app      # Specify the build context directory
      dockerfile: Dockerfile  # Specify the Dockerfile to use for building the image
    ports:
      - "3003:3000"           # Map port 3003 on the host to port 3000 in the container
    environment:
      RAILS_ENV: production                 # Set the Rails environment to production
      RAILS_MASTER_KEY: ${RAILS_MASTER_KEY} # Set the Rails master key from an environment variable
      DATABASE_URL: postgres://postgres:postgres@postgres:5432/syno_ror_production # Database URL for the Rails app
    depends_on:
      - postgres              # Ensure the postgres service is started before the Rails app
    restart: unless-stopped   # Restart the container unless it is explicitly stopped
    volumes:
      - /volume1/docker/syno-ror/rails_app:/rails # Mount the host directory to the container's /rails directory

  # Define the PostgreSQL database service
  postgres:
    image: postgres:16.1      # Use the postgres image version 16.1
    restart: unless-stopped   # Restart the container unless it is explicitly stopped
    environment:
      POSTGRES_USER: postgres          # Set the PostgreSQL user
      POSTGRES_PASSWORD: postgres      # Set the PostgreSQL password
      POSTGRES_DB: syno_ror_production # Set the PostgreSQL database name
    volumes:
      - postgres-data:/var/lib/postgresql/data # Mount the named volume to the container's data directory

  # Define the Cloudflared service
  cloudflared:
    image: cloudflare/cloudflared # Use the cloudflare/cloudflared image
    restart: unless-stopped       # Restart the container unless it is explicitly stopped
    command: tunnel --url http://192.168.0.230:84 # Run the tunnel command with the specified URL
    volumes:
      - /volume1/docker/syno-ror/cloudflared:/etc/cloudflare # Mount the host directory to the container's /etc/cloudflare directory
    depends_on:
      - rails-app             # Ensure the rails-app service is started before the Cloudflared service

# Define named volumes
volumes:
  postgres-data: # Named volume for PostgreSQL data
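To deploy it, create a new Project in Container Manager pointing at /volume1/docker/syno-ror and paste this compose file in, or, if you prefer the CLI, something like this works over SSH (sudo is needed to talk to Docker on DSM):
# Build and start the stack from the project folder on the NAS
cd /volume1/docker/syno-ror
sudo docker-compose up -d --build

# Tail the cloudflared logs to grab the generated trycloudflare.com URL
sudo docker-compose logs -f cloudflared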
Next, precompile the assets:
rails assets:precompile
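A hedged note on why this step exists: because the compose file bind-mounts the host folder over /rails, the assets baked into the image at build time are shadowed by whatever sits on the NAS. One way to handle that is to precompile locally, as above, and sync the generated files across; a sketch assuming the same paths as earlier:
# Sync the locally precompiled assets into the mounted app folder on the NAS
rsync -avz --progress -e ssh --rsync-path=/bin/rsync public/assets user@your_nas_ip:/volume1/docker/syno-ror/rails_app/public/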
After syncing updated code, signal the app to restart:
ssh -i ~/.ssh/id_rsa user@your_nas_ip 'touch /volume1/docker/syno-ror/rails_app/tmp/restart.txt'
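Note that tmp/restart.txt is a Passenger convention; the default Rails 8 image runs Puma (behind Thruster), which doesn’t watch that file, so unless you’ve added a watcher, a blunt but reliable fallback is restarting the service itself:
# Restart only the Rails service (assumes the compose project lives in /volume1/docker/syno-ror)
ssh -i ~/.ssh/id_rsa user@your_nas_ip 'cd /volume1/docker/syno-ror && sudo docker-compose restart rails-app'
Restarting the cloudflared container, however, hands you a new random trycloudflare.com subdomain. The script below works around that: it scans the logs of every running cloudflared container for the current URL, records it, and emails it via Postmark whenever it changes.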
#!/bin/bash
# Configuration
URLS_FILE="/volume1/docker/cloudflared-urls.txt"
NEW_URLS_ADDED=0 # Flag to track if any new URLs are added

# TO_EMAIL, FROM_EMAIL and POSTMARK_TOKEN are expected to be provided via the environment (see below)

# Make sure the URLs file exists so the grep below doesn't complain on the first run
[ -f "$URLS_FILE" ] || touch "$URLS_FILE"

# Find all containers with 'cloudflared' in their name
CONTAINERS=$(docker ps --filter "name=cloudflared" --format "{{.Names}}")

# Iterate over each container
for CONTAINER_NAME in $CONTAINERS; do
  # Define the file to store the last URL for this container
  LAST_URL_FILE="/volume1/docker/${CONTAINER_NAME}-last_url.txt"

  # Fetch the latest logs from the container
  LOGS=$(docker logs "$CONTAINER_NAME" 2>&1) # Redirects stderr (2) to stdout (1)

  # Extract the URL from the logs using a regular expression
  URL=$(echo "$LOGS" | grep -Eo 'https?://[a-zA-Z0-9.-]+\.trycloudflare\.com' | tail -n 1)

  # Ensure we have a URL
  if [ -n "$URL" ]; then
    # Read the last URL sent to avoid duplicate entries
    if [ -f "$LAST_URL_FILE" ]; then
      LAST_URL=$(cat "$LAST_URL_FILE")
    else
      LAST_URL=""
    fi

    # Check if the URL is new
    if [ "$URL" != "$LAST_URL" ]; then
      # Update the last URL file
      echo "$URL" > "$LAST_URL_FILE"

      # Check if the URL is already in the URLs file
      if ! grep -Fq "URL: $URL" "$URLS_FILE"; then
        # Append to URLs file with desired formatting
        echo -e "Container: $CONTAINER_NAME\nURL: $URL\n" >> "$URLS_FILE"
        echo "Added new URL for $CONTAINER_NAME to $URLS_FILE."
        NEW_URLS_ADDED=1 # Set the flag as a new URL has been added
      else
        echo "URL for $CONTAINER_NAME already exists in $URLS_FILE. Skipping."
      fi
    else
      echo "No new URL found for $CONTAINER_NAME. The URL has already been recorded."
    fi
  else
    echo "No URL found in the logs for $CONTAINER_NAME."
  fi
done

# Check if any new URLs were added before sending the email
if [ "$NEW_URLS_ADDED" -eq 1 ]; then
  # Prepare the email body from the URLs file, escaping newlines so the JSON payload below stays valid
  EMAIL_BODY=$(awk -v RS='' '{gsub(/ - /, "\nURL: "); print $0 "\n"}' "$URLS_FILE" | awk '{printf "%s\\n", $0}')

  # Send the email with all URLs via Postmark
  curl "https://api.postmarkapp.com/email" \
    -X POST \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -H "X-Postmark-Server-Token: $POSTMARK_TOKEN" \
    -d "{
      \"From\": \"$FROM_EMAIL\",
      \"To\": \"$TO_EMAIL\",
      \"Subject\": \"Cloudflared URLs\",
      \"TextBody\": \"$EMAIL_BODY\"
    }"
  echo "Email sent with all URLs from $URLS_FILE."
else
  echo "No new URLs detected. Email not sent."
fi
# Fill in the three placeholders before running; they are deliberately left blank here
TO_EMAIL="" FROM_EMAIL="" POSTMARK_TOKEN="" bash /volume1/docker/cloudflared_url_log_note.sh
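To run this periodically, use DSM’s Task Scheduler (Control Panel > Task Scheduler > User-defined script, run as root so it can reach the Docker daemon) or a root crontab entry; a sketch with placeholder values:
# Check for new tunnel URLs every 5 minutes (placeholder addresses and token)
*/5 * * * * TO_EMAIL="you@example.com" FROM_EMAIL="sender@example.com" POSTMARK_TOKEN="your-postmark-token" bash /volume1/docker/cloudflared_url_log_note.sh >> /volume1/docker/cloudflared_url_log_note.log 2>&1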
Potential Risks and Mitigation Strategies
While using Cloudflared and Cloudflare’s network can enhance security by hiding your origin IP and providing DDoS protection, there are still potential vulnerabilities to consider:
Potential risks
Vulnerabilities in the NAS operating system
- Unpatched software: Ensure your Synology DSM is regularly updated to patch known vulnerabilities.
- Privilege escalation: An attacker with limited access could exploit OS vulnerabilities to gain higher privileges.
Web server vulnerabilities
- Server software exploits: Keep your web server software updated to prevent exploitation.
- Misconfigurations: Double-check settings to avoid exposing sensitive files or directories.
Cloudflared tunnel risks
- Tunnel misconfiguration: Ensure only intended services are exposed through the tunnel.
- Credential compromise: Secure authentication tokens or keys used for the tunnel.
Exposure of internal network
- Lateral movement: Isolate the NAS from other critical network segments.
- Network scanning: Implement network segmentation to limit exposure.
Data leakage
- Sensitive information exposure: Avoid accidentally including sensitive data in your site.
- Error messages: Configure error handling to prevent revealing system information.
Denial of Service (DoS) attacks
- Resource exhaustion: Monitor for unusual activity that could overwhelm resources.
Third-party dependency risks
- Cloudflare service disruption: Be prepared for potential service outages.
- Privacy concerns: Be aware that your data traffic passes through Cloudflare’s network.
SSL/TLS considerations
- Improper SSL configuration: Ensure end-to-end encryption between clients, Cloudflare, and your NAS.
- Certificate management: Regularly update and manage SSL certificates.
Authentication bypass
- Overexposed services: Limit exposure to only necessary services.
Compliance and legal risks
- Data protection laws: Ensure compliance with relevant regulations like GDPR.
User account security
- Weak passwords: Use strong, unique passwords and consider multi-factor authentication.
- Default accounts: Disable or secure default accounts.
Logging and monitoring
- Insufficient monitoring: Implement proper logging and monitoring.
- Log exposure: Secure logs to prevent information leakage.
Physical security
- Local access: Secure the physical hardware to prevent unauthorized access.
Firmware vulnerabilities
- Outdated firmware: Keep NAS hardware firmware updated.
Mitigation strategies
- Keep software updated: Regularly update all software components.
- Secure configurations: Review and secure all configurations.
- Strong authentication: Implement strong passwords and multi-factor authentication.
- Secure tunnel credentials: Protect and periodically rotate authentication tokens and keys.
- Implement network segmentation: Isolate your NAS from other network devices.
- Use proper SSL/TLS encryption: Ensure secure communication channels.
- Regular backups and recovery plan: Maintain backups and have a response plan.
- Monitor and log activity: Set up monitoring and alerts for suspicious activities.
- Firewall rules: Restrict traffic to only what’s necessary.
Happy coding! 🙇🏻‍♂️