Our nine-server HA WordPress cluster was fast, redundant, and self-healing — but every database query was crossing the public internet. MariaDB replication between Sydney, Brisbane, and Melbourne. HyperDB read/write splitting. Page view counter increments on every visit. All of it traversing the open WAN. We moved the entire stack into a BinaryLane cross-region VPC, migrated all database traffic to private IPs, and removed public IP addresses from all three database servers entirely. Here is why, how, and what it means for security.
The Problem: Database Traffic on the Public Internet
When we built the HA cluster, all nine servers — six web nodes and three database nodes — communicated over public IP addresses. This was the path of least resistance at the time, and it worked. But it meant:
- Every page load triggered a database query from a web server to a DB server over the public internet
- MariaDB replication between the primary in Sydney and replicas in Brisbane and Melbourne was flowing over the WAN
- Three database servers had public IP addresses and were listening on port 3306 — reachable by anyone who knew the address
- Firewall rules restricted access, but a misconfiguration or zero-day could still expose the database directly to the internet
This is a common pattern in cloud deployments, and firewalls do their job. But there is a better way: make the database servers unreachable from the internet entirely.
What Is a VPC and Why It Matters
A Virtual Private Cloud (VPC) creates an isolated private network for your servers. On BinaryLane, VPCs have a powerful feature: they are cross-region. A single VPC can span Sydney, Melbourne, Brisbane, and Perth — giving all your servers a private IP address on a shared network, regardless of which data centre they sit in.
We created a VPC called wp-ha-cluster with the IP range 10.241.0.0/16. This gave us 65,536 private addresses (65,534 usable hosts) to work with across all three cities. Each server received a second network interface — a private one — alongside its existing public interface. The key insight is that servers do not need a public IP to function. If a server only needs to communicate with other servers in the VPC, the public interface can be removed entirely.
Database servers are a perfect candidate. They talk to web servers (queries) and to each other (replication) — all of which can happen over the VPC. They have no reason to be on the internet.
The Migration: Before and After
| Server | Role | Region | Before (Public IP) | After (VPC IP) | Public IP |
|---|---|---|---|---|---|
| wp-db-primary | DB Primary | SYD | 45.124.54.112 | 10.241.1.1 | Removed |
| wp-db-replica | DB Replica | BNE | 110.232.112.239 | 10.241.1.2 | Removed |
| wp-db-replica-mel | DB Replica | MEL | 103.230.156.210 | 10.241.1.3 | Removed |
| wp-web-1-syd | Web | SYD | 150.107.74.199 | 10.241.2.1 | Kept |
| wp-web-2-syd | Web | SYD | 103.16.129.22 | 10.241.2.2 | Kept |
| wp-web-3-bne | Web | BNE | 45.124.55.49 | 10.241.2.3 | Kept |
| wp-web-4-bne | Web | BNE | 103.4.235.27 | 10.241.2.4 | Kept |
| wp-web-5-mel | Web | MEL | 43.229.61.179 | 10.241.2.5 | Kept |
| wp-web-6-mel | Web | MEL | 175.45.183.21 | 10.241.2.6 | Kept |
For a detailed view of the complete cluster topology — all nine servers, VPC addressing, replication flows, and HyperDB read/write splitting — see the full architecture diagram.
Zero Downtime
Adding a VPC interface is non-disruptive — it creates a second network adapter alongside the existing public one. We migrated services to private IPs one at a time, verified each change, and only removed public IPs from the database servers once everything was confirmed working. The site never went down during the migration.
Step by Step: What We Changed
1. Created the VPC and Assigned Private IPs
We created the wp-ha-cluster VPC on BinaryLane and moved all nine servers into it. Each server received a predictable private IP — database servers in the 10.241.1.x range, web servers in the 10.241.2.x range. We verified cross-region connectivity by pinging every server from every other server — Sydney to Brisbane, Brisbane to Melbourne, Melbourne to Sydney — all over the VPC.
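The mesh check can be sketched as a small script run from each node in turn. The hostnames and addresses below come from the migration table; the script itself is an illustration rather than the exact commands we ran, and the ping is left commented so the sketch is safe to run anywhere.

```shell
#!/usr/bin/env bash
# VPC address plan for the cluster (addresses from the migration table).
declare -A vpc_ip=(
  [wp-db-primary]=10.241.1.1      [wp-db-replica]=10.241.1.2
  [wp-db-replica-mel]=10.241.1.3  [wp-web-1-syd]=10.241.2.1
  [wp-web-2-syd]=10.241.2.2       [wp-web-3-bne]=10.241.2.3
  [wp-web-4-bne]=10.241.2.4       [wp-web-5-mel]=10.241.2.5
  [wp-web-6-mel]=10.241.2.6
)

# From any one node, try every peer over the VPC; run the same loop on
# each of the nine servers to cover the full mesh.
for host in "${!vpc_ip[@]}"; do
  echo "checking ${host} at ${vpc_ip[$host]}"
  # ping -c 1 -W 2 "${vpc_ip[$host]}" >/dev/null || echo "UNREACHABLE: ${host}"
done
```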
2. Switched MariaDB Replication to Private IPs
The two read replicas in Brisbane and Melbourne were replicating from the primary in Sydney over public IPs. We pointed them at the primary’s new VPC address:
```sql
STOP REPLICA;
CHANGE MASTER TO MASTER_HOST='10.241.1.1';
START REPLICA;
```
Replication resumed immediately. The binary log position was preserved — no data loss, no resync needed. We confirmed with SHOW REPLICA STATUS that both replicas were caught up with zero lag.
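As a small illustration of that verification step, here is one way to pull the lag field out of the status output. The sample line is fabricated for the sketch; on a real replica you would pipe the output of the mariadb client into the function instead.

```shell
# Extract the lag field from `SHOW REPLICA STATUS \G` output.
# On a replica: mariadb -e "SHOW REPLICA STATUS\G" | check_lag
check_lag() {
  awk -F': *' '/Seconds_Behind/ { print $2; exit }'
}

# Fabricated sample line, standing in for real client output:
lag=$(printf 'Seconds_Behind_Master: 0\n' | check_lag)
echo "replication lag: ${lag}s"
```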
3. Updated HyperDB Configs on All 6 Web Servers
HyperDB handles read/write splitting — writes go to the primary, reads go to the nearest replica. Each web server had a db-config.php pointing at public IPs. We swapped every public IP to its VPC equivalent while preserving the region-local read priority:
- Sydney web servers — read from primary (10.241.1.1) first, then BNE replica, then MEL replica
- Brisbane web servers — read from BNE replica (10.241.1.2) first, then primary, then MEL replica
- Melbourne web servers — read from MEL replica (10.241.1.3) first, then primary, then BNE replica
All writes go to 10.241.1.1 (the primary) regardless of which web server handles the request.
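As a sketch of what one of those db-config.php files looks like, here is the shape of a Brisbane web server's configuration using HyperDB's $wpdb->add_database() call. The credential constants are placeholders; in HyperDB, lower read values are tried first, which is what produces the region-local priority.

```php
// db-config.php sketch for a Brisbane web server (illustrative only;
// replace the placeholder credential constants with real values).

$wpdb->add_database( array(
    'host'     => '10.241.1.2',  // BNE replica: preferred local reads
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 0,             // never send writes to a replica
    'read'     => 1,             // lowest value = tried first
) );

$wpdb->add_database( array(
    'host'     => '10.241.1.1',  // SYD primary: all writes, fallback reads
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 1,
    'read'     => 2,
) );

$wpdb->add_database( array(
    'host'     => '10.241.1.3',  // MEL replica: last-resort reads
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 0,
    'read'     => 3,
) );
```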
4. Removed Public IPs from All 3 Database Servers
Once replication and HyperDB were confirmed working over private IPs, we removed the public IP addresses from all three database servers. This was the critical step — the moment the database layer became invisible from the internet.
Important: Once you remove a server’s public IP, you can no longer SSH to it directly. You need a jump host or VNC console access. We set this up before removing the IPs — plan your access path first.
5. Set Up SSH ProxyJump for Database Management
With no public IPs, the database servers are only reachable through the VPC. We configured SSH ProxyJump so that each DB server is accessed through a web server in its region:
- wp-db-primary (10.241.1.1) — via wp-web-1-syd
- wp-db-replica (10.241.1.2) — via wp-web-3-bne
- wp-db-replica-mel (10.241.1.3) — via wp-web-5-mel
From the management machine, connecting to a database server now automatically tunnels through the regional web server. BinaryLane’s VNC console provides emergency access if the jump host is unavailable.
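A minimal ~/.ssh/config for that layout looks like the following. The public IPs come from the migration table; the host aliases are assumptions, and usernames and key paths are omitted.

```
# ~/.ssh/config on the management machine (sketch; aliases assumed)

# Regional jump hosts (web servers keep their public IPs)
Host wp-web-1-syd
    HostName 150.107.74.199
Host wp-web-3-bne
    HostName 45.124.55.49
Host wp-web-5-mel
    HostName 43.229.61.179

# Database servers: VPC-only, reached via the regional web server
Host wp-db-primary
    HostName 10.241.1.1
    ProxyJump wp-web-1-syd
Host wp-db-replica
    HostName 10.241.1.2
    ProxyJump wp-web-3-bne
Host wp-db-replica-mel
    HostName 10.241.1.3
    ProxyJump wp-web-5-mel
```

With this in place, `ssh wp-db-primary` tunnels through the Sydney web server automatically.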
The Security Win
This is the whole point. Here is what changed:
| Metric | Before VPC | After VPC |
|---|---|---|
| DB servers with public IPs | 3 | 0 |
| Port 3306 reachable from internet | Yes (firewall-filtered) | No port exists |
| DB traffic encryption required | Yes (over WAN) | Optional (private network) |
| Attack surface for DB layer | 3 public IPs | None |
| Network path for DB queries | Public internet | VPC private network |
| Firewall as sole DB protection | Yes | No — no interface to reach |
You Cannot Hack What You Cannot Reach
A firewall protects a port. Removing the public IP removes the port entirely. There is no IP address to scan, no port to probe, no firewall rule to misconfigure. The database servers exist only on the private VPC network — they are not part of the internet at all. This is the strongest form of network security: absence.
Performance Bonus
Security was the primary motivation, but the VPC migration also improved performance. VPC traffic between BinaryLane regions travels over their internal backbone rather than the public internet. This means:
- Lower latency for cross-region database queries — Brisbane web servers querying the Sydney primary see reduced round-trip times
- More consistent performance — private network traffic avoids public internet congestion and routing variability
- Page view counter writes — the view counter we deployed increments a database counter on every page load, and those writes now traverse the VPC instead of the WAN
Replication between Sydney, Brisbane, and Melbourne runs noticeably smoother. The replicas stay caught up with near-zero lag consistently, rather than occasionally spiking during public internet congestion.
AI-Managed Infrastructure
This entire VPC migration — from planning through execution — was performed by Claude using two open-source MCP (Model Context Protocol) servers:
- BinaryLane MCP — Claude created the VPC, moved servers into it, assigned private IPs, and removed public IPs from the database servers. Every infrastructure change was made programmatically through the BinaryLane API.
- SSH MCP — Claude connected to each server, updated MariaDB replication targets, rewrote HyperDB configs, configured SSH ProxyJump access, and verified every step. All server-side work happened over SSH without a human touching a terminal.
Claude made every decision during the migration — which IPs to assign, what order to migrate services, when it was safe to remove public IPs, and how to verify each step. The migration was completed in a single session with zero downtime.
This is the same approach we use for all infrastructure on this site. Claude is not suggesting commands for us to run — it is the operator, executing changes directly and adapting based on what it finds. The VPC migration is a good example of why this works: it involved coordinating changes across nine servers in three cities, with dependencies between steps and verification at each stage. Exactly the kind of operational work that benefits from an AI operator that can reason about the full picture.
💡 Try It Yourself
Both MCP servers are open-source. If you have a BinaryLane account, you can give Claude the same infrastructure capabilities:
- BinaryLane MCP: github.com/termau/binarylane-mcp
- SSH MCP: github.com/termau/ssh-mcp
Install them, point Claude at your infrastructure, and start a conversation.