Building a Multi-City High Availability WordPress Stack That Survives Disasters

This WordPress site isn’t running on a single server. It’s distributed across four web servers and two database nodes spanning two Australian cities — Sydney and Brisbane. If an entire data centre goes offline, your browser simply connects to the surviving region. No manual intervention. No downtime. Here’s how we built it.

The Architecture at a Glance

Every request to wp.adamhomenet.com is resolved via DNS round-robin to one of four web servers. Two sit in BinaryLane’s Sydney data centre, two in Brisbane. Each server runs an identical LEMP stack — Nginx, PHP 8.3 with OPcache, and a shared WordPress codebase. Behind them, a MariaDB primary in Sydney replicates asynchronously to a read-ready replica in Brisbane.

🏗️ Server Inventory

| Server | Role | Region | Specs |
| --- | --- | --- | --- |
| wp-web-1-syd | Web / PHP | Sydney | 1 vCPU, 1GB RAM |
| wp-web-2-syd | Web / PHP | Sydney | 1 vCPU, 1GB RAM |
| wp-web-3-bne | Web / PHP | Brisbane | 1 vCPU, 1GB RAM |
| wp-web-4-bne | Web / PHP | Brisbane | 1 vCPU, 1GB RAM |
| wp-db-primary | MariaDB Primary | Sydney | 1 vCPU, 2GB RAM |
| wp-db-replica | MariaDB Replica | Brisbane | 1 vCPU, 2GB RAM |

No Dependence on Any One City

This is the core design principle: no single Australian city is a single point of failure. Traditional hosting puts everything in one data centre. If that facility loses power, connectivity, or suffers a natural disaster — your site goes dark. We took a fundamentally different approach.

Our DNS configuration publishes four A records for wp.adamhomenet.com, two pointing to Sydney servers and two pointing to Brisbane. When a client resolves the domain, it receives all four IPs. If Sydney becomes unreachable, browsers automatically fall back to the Brisbane addresses. The site keeps serving.
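
You can see what a client receives by resolving the record yourself. A minimal Python 3 sketch using only the standard library (the hostname is the one from this post; how many addresses you see depends on your resolver returning all published records):

```python
# Resolve wp.adamhomenet.com and list every A record the resolver returns.
import socket

HOSTNAME = "wp.adamhomenet.com"

# getaddrinfo returns every address the resolver hands back; with four A records
# published, all four web server IPs (two Sydney, two Brisbane) should appear.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)})

print(f"{HOSTNAME} resolves to {len(addresses)} address(es):")
for ip in addresses:
    print(f"  {ip}")
```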

🌏 Multi-Region Resilience

Sydney and Brisbane are separated by approximately 730 kilometres. They sit on different power grids, different network backbones, and different geographic risk profiles. A cyclone hitting Brisbane won’t affect Sydney. A power grid failure in NSW won’t touch Queensland. This geographic separation is our strongest defence against regional outages.

What Can Fail (And We Stay Online)

The architecture is designed to survive at least two simultaneous component failures. Here are real scenarios we’ve tested:

| Failure Scenario | What Goes Down | Result |
| --- | --- | --- |
| Two web servers in different regions | web-1-syd + web-3-bne | ✅ Each region still has one healthy server |
| Both web servers in one region | web-1-syd + web-2-syd | ✅ Brisbane serves all traffic seamlessly |
| A web server + the DB replica | web-2-syd + db-replica | ✅ Three web servers + primary DB continue |
| Entire Sydney data centre | All Sydney servers | ✅ Brisbane web servers + DB replica (after promotion) |
| Entire Brisbane data centre | All Brisbane servers | ✅ Sydney web servers + primary DB continue |

The only scenario that causes full downtime is losing both database servers simultaneously — a primary in Sydney and its replica in Brisbane going down at the exact same time. Even then, automated backups mean we can restore within 15–30 minutes.

Region Failure Simulation — Watch Brisbane Fail and Recover
[Interactive diagram: traffic flows from the user to the Sydney (primary) and Brisbane (secondary) web and database nodes; when the Brisbane region goes offline and is later restored, Sydney absorbs all traffic automatically, with zero downtime for users.]

Live Migration: Move Running Servers Between Cities — Zero Downtime

Here’s where BinaryLane’s infrastructure truly sets this stack apart. Most cloud providers let you spin up servers in different regions, but BinaryLane offers something far more powerful: live migration of running VPS instances between Australian cities.

BinaryLane operates data centres in six regions, spanning five Australian cities plus Singapore: Sydney (NSW), Melbourne (VIC), Brisbane (QLD), Adelaide (SA), Perth (WA), and Singapore. Any server in our stack can be live-migrated between these regions while it’s still running — no shutdown, no rebuild, no data loss.

🚀 How Live Migration Works

When you trigger a region change in the BinaryLane panel, the platform doesn’t destroy and recreate your server. It live-migrates the running VPS — memory, disk, state, everything — to a host in the target city. During the transfer, incoming traffic is routed via anycast to the user’s closest point of presence, then carried across BinaryLane’s secure private backbone to reach the server wherever it currently sits. The actual cutover drops approximately 2–3 packets. The user doesn’t notice. The server doesn’t restart.
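
If you want to watch a cutover yourself, a crude probe loop is enough. A rough sketch, assuming a Linux-style ping and a placeholder IP for the migrating server:

```python
# Ping the migrating server once a second and count the misses.
# On our migrations the gap is typically 2-3 lost probes.
# Assumes a Linux-style ping (-c count, -W timeout in seconds); the IP is a placeholder.
import subprocess
import time

SERVER_IP = "203.0.113.10"  # placeholder -- substitute the migrating server's IP

lost = 0
while True:  # stop with Ctrl+C once the migration completes
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", SERVER_IP],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode != 0:
        lost += 1
        print(f"{time.strftime('%H:%M:%S')}  probe lost (total lost: {lost})")
    time.sleep(1)
```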

Live Migration — Server Moves Between Cities, IP Stays the Same
[Interactive diagram: web-3-bne is live-migrated from Brisbane (QLD) to Melbourne (VIC) over BinaryLane’s secure anycast backbone; its IP address stays the same and only 2–3 packets are dropped.]

🌏 Act of God? We Move the Entire Stack — Live.

Imagine a catastrophic event threatens the Brisbane region — a severe cyclone, prolonged flooding, or critical infrastructure damage. We don’t need to panic, rebuild, or restore from backups. We simply live-migrate our Brisbane servers to Melbourne, Adelaide, or Perth. Each server moves individually, while still running, with its complete disk, all its data, and its entire configuration intact.

Here’s what a full regional evacuation looks like for our Brisbane servers:

Step 1: Live-migrate web servers — Select each Brisbane web server in the BinaryLane panel and choose a target region (Melbourne, Adelaide, or Perth). The VPS is transferred live across the backbone. The server keeps running throughout. With two Brisbane web servers, we can even migrate them to different replacement cities for additional geographic spread.

Step 2: Live-migrate the database replica — The Brisbane MariaDB replica is migrated the same way. It arrives in the new city with its complete dataset, replication configuration, and replication thread intact. It reconnects to the Sydney primary and resumes replicating — no CHANGE MASTER TO required, no data re-seeding, no manual intervention.

And that’s it. There is no Step 3. Because BinaryLane uses anycast routing, each server’s IP address remains the same regardless of which city it’s in. Traffic is routed across the backbone to wherever the server currently lives. No DNS changes required. No TTL propagation delays. The A records for wp.adamhomenet.com don’t change — they still point to the same IPs, which now happen to be served from a different city.
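
The migration itself is driven from the panel, but the BinaryLane API makes it easy to confirm afterwards that every server landed where you expect. A hedged sketch against the /v2/servers listing (the endpoint and response field names should be verified against BinaryLane’s API reference):

```python
# After an evacuation, confirm where each server now lives via the BinaryLane API.
# A sketch only: the token comes from the environment, and the response field names
# (servers[].name, servers[].region.slug) are assumptions to check against the docs.
import json
import os
import urllib.request

API_URL = "https://api.binarylane.com.au/v2/servers"
TOKEN = os.environ["BINARYLANE_API_TOKEN"]  # assumed to be set in your shell

request = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(request) as response:
    payload = json.load(response)

for server in payload.get("servers", []):
    region = server.get("region", {})
    print(f"{server.get('name', '?'):<16} {region.get('slug', '?'):<6} {region.get('name', '')}")
```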

🎯 The Result

Downtime: effectively zero. Dropped packets: 2–3 per server. Data lost: none. The entire LEMP stack — web servers, database replica, all WordPress content, all configuration — relocates to a new Australian city while users continue browsing the site. No DNS changes, no IP address updates, no client-side changes of any kind. We haven’t rebuilt or recovered. We’ve physically moved running servers between cities, live, over BinaryLane’s backbone, and the outside world didn’t even notice.

This capability is unique to BinaryLane’s architecture. Traditional cloud providers force you to snapshot, destroy, and rebuild in a new region — a process that involves downtime, manual reconfiguration, and risk. BinaryLane’s approach treats the entire Australian network as one fabric. Anycast routing means every server keeps its IP address no matter which city it migrates to — traffic is always routed to the right place automatically. There’s no DNS propagation, no client-side caching issues, no window of unreachability. The migration is invisible.

The practical impact is profound: our six-server WordPress stack can be relocated to any combination of Australian cities at any time, for any reason — not just disasters, but also performance optimisation, cost management, or regulatory compliance. Every server in the stack can be moved independently, without coordination downtime, and without losing a single database row or uploaded file.

The Database Layer: Replication Across State Lines

MariaDB asynchronous replication keeps a full copy of every WordPress table in Brisbane, updated in near-real-time from the Sydney primary. The replication lag is typically under one second.
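
Lag is easy to keep an eye on from the replica itself. A minimal sketch, assuming the mariadb command-line client is installed and credentials are picked up from ~/.my.cnf:

```python
# Check replication lag on the Brisbane replica.
# Assumes the mariadb client is installed and ~/.my.cnf holds credentials.
import subprocess

def replication_lag_seconds() -> int | None:
    """Return Seconds_Behind_Master from SHOW REPLICA STATUS, or None if not replicating."""
    result = subprocess.run(
        ["mariadb", "-e", "SHOW REPLICA STATUS\\G"],
        capture_output=True,
        text=True,
        check=True,
    )
    for line in result.stdout.splitlines():
        if "Seconds_Behind_Master" in line:
            value = line.split(":", 1)[1].strip()
            return None if value == "NULL" else int(value)
    return None

lag = replication_lag_seconds()
print(f"Replication lag: {lag} second(s)" if lag is not None else "Replica is not running")
```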

If the primary fails, promoting the Brisbane replica to primary involves four manual steps (scripted as a sketch after this list):

  1. Stop the replica’s replication thread
  2. Set read_only=0 on the replica
  3. Update wp-config.php on all web servers to point to the new primary IP
  4. Restart PHP-FPM to clear connection pools
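
Here is what those steps look like scripted, run from an admin machine with SSH key access to every node. A sketch only: the hostnames, placeholder IPs, WordPress path, and PHP-FPM unit name are assumptions for illustration.

```python
# Manual failover, scripted: stop replication, make the replica writable,
# repoint WordPress, restart PHP-FPM. Paths, IPs, and names are illustrative.
import subprocess

REPLICA = "wp-db-replica"
WEB_SERVERS = ["wp-web-1-syd", "wp-web-2-syd", "wp-web-3-bne", "wp-web-4-bne"]
NEW_PRIMARY_IP = "203.0.113.20"  # placeholder -- the replica's IP after promotion
OLD_PRIMARY_IP = "203.0.113.21"  # placeholder -- the failed primary's IP

def ssh(host: str, command: str) -> None:
    """Run a command on a remote host over SSH, raising on failure."""
    subprocess.run(["ssh", host, command], check=True)

# Steps 1-2: stop the replication thread and make the replica writable.
ssh(REPLICA, 'mariadb -e "STOP REPLICA; SET GLOBAL read_only = 0;"')

# Steps 3-4: point wp-config.php at the new primary and restart PHP-FPM on every web node.
for host in WEB_SERVERS:
    ssh(host, f'sed -i "s/{OLD_PRIMARY_IP}/{NEW_PRIMARY_IP}/" /var/www/wordpress/wp-config.php')
    ssh(host, "systemctl restart php8.3-fpm")
```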

This manual failover takes approximately 10 minutes. For a proof-of-concept at $64/month, that’s an acceptable trade-off. Production deployments could add Galera Cluster for automatic multi-master failover.

Partner Servers: Host-Level Redundancy Within Each Region

Multi-region redundancy protects against city-wide outages, but what about failures within a single data centre? In any cloud environment, multiple VPS instances can end up on the same physical host node. If that host suffers a hardware failure, every VPS on it goes down together.

BinaryLane solves this with Partner Servers. When you designate two servers as partners, BinaryLane’s placement system guarantees they will never be co-located on the same physical host. If one host node fails, only one of the two partner servers is affected — the other continues running on a completely separate machine.

🔗 Why This Matters for Our Stack

We have two web servers in Sydney (wp-web-1-syd and wp-web-2-syd) and two in Brisbane (wp-web-3-bne and wp-web-4-bne). By partnering the two servers within each region, we ensure that a single host failure in Sydney can only take down one of our two Sydney web servers — the other is guaranteed to be on a different physical machine. The same applies to our Brisbane pair.

This adds an additional layer of resilience within each data centre, on top of the geographic redundancy between cities. Even if we lose a host in Sydney and a host in Brisbane simultaneously, we still have two healthy web servers — one in each region.

Partner servers are configured through the BinaryLane control panel and trigger an automatic live-migration if two partnered servers are found to be co-located. The partner_id field is also exposed in the BinaryLane API, making it possible to verify partner status programmatically — adding host-level awareness to monitoring and infrastructure-as-code workflows.
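
The same /v2/servers listing used earlier carries partner_id, so a short script can confirm that each regional pair really is partnered. A sketch, with field names to be verified against the API reference:

```python
# Verify partner placement using the partner_id field exposed on each server.
# Field names are assumptions to check against BinaryLane's API reference.
import json
import os
import urllib.request

API_URL = "https://api.binarylane.com.au/v2/servers"
TOKEN = os.environ["BINARYLANE_API_TOKEN"]

request = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(request) as response:
    servers = json.load(response).get("servers", [])

by_id = {server["id"]: server for server in servers}
for server in servers:
    partner = by_id.get(server.get("partner_id"))
    partner_name = partner["name"] if partner else "none"
    print(f"{server['name']:<16} partner: {partner_name}")
```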

Security at Every Layer

High availability means nothing if the servers aren’t hardened. Every node in this cluster has:

  • BinaryLane Advanced Firewall — Hypervisor-level stateless firewall with explicit deny-all rules. Only SSH (22), HTTP (80), HTTPS (443), DNS (53), and ICMP are permitted. Database servers additionally allow MySQL (3306) only from web server IPs.
  • SSH Key-Only Authentication — Password authentication is disabled across all servers. Only ed25519 keys are accepted.
  • Nginx Hardening — Server tokens hidden, TLS 1.2+ enforced, HSTS with preload, X-Frame-Options, X-Content-Type-Options, CSP headers, and hidden file denial (see the header check sketch after this list).
  • Let’s Encrypt SSL — Automatic TLS certificates with HTTP-to-HTTPS redirect on every web server.
  • WordPress Hardening — File editing disabled, XML-RPC blocked, wp-config.php access denied at the web server level.
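
As promised above, here is a quick external check of those headers, using only the Python standard library (the URL is the site from this post):

```python
# Fetch the site over HTTPS and confirm the security headers are actually being sent.
import urllib.request

URL = "https://wp.adamhomenet.com/"
EXPECTED = [
    "Strict-Transport-Security",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

with urllib.request.urlopen(URL) as response:
    headers = response.headers

for name in EXPECTED:
    value = headers.get(name)
    status = "OK" if value else "MISSING"
    print(f"{status:<8} {name}: {value or '-'}")

# The Server header should not leak the Nginx version (server_tokens off).
print(f"Server header: {headers.get('Server', '-')}")
```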

Content Synchronisation

WordPress media uploads land on whichever web server handles the request. A cron-based rsync job runs every 5 minutes from the primary web server, synchronising the wp-content/uploads/ directory to all other nodes using SSH key authentication. This ensures uploaded images and media files are available regardless of which server handles the next request.
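
Concretely, the sync boils down to a few pushes from the primary web server. A sketch of the job (written in Python to match the other sketches; the WordPress install path and hostnames are assumptions):

```python
# Push wp-content/uploads/ from the primary web server to every other node over SSH,
# mirroring the cron-based rsync job described above. Paths and hostnames are illustrative.
import subprocess

UPLOADS_DIR = "/var/www/wordpress/wp-content/uploads/"  # assumed install path
TARGETS = ["wp-web-2-syd", "wp-web-3-bne", "wp-web-4-bne"]

for host in TARGETS:
    # -a preserves permissions and timestamps, -z compresses over the wire.
    subprocess.run(
        ["rsync", "-az", UPLOADS_DIR, f"{host}:{UPLOADS_DIR}"],
        check=True,
    )
```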

The Stack

🔧 Technology Choices

| Component | Technology | Why |
| --- | --- | --- |
| OS | Ubuntu 24.04 LTS | Long-term support, proven stability |
| Web Server | Nginx | High-performance, low memory footprint |
| PHP | PHP 8.3 FPM | Latest performance improvements, OPcache |
| Database | MariaDB 10.11 | MySQL-compatible, excellent replication |
| CMS | WordPress 6.x | World’s most popular CMS, massive ecosystem |
| SSL | Let’s Encrypt | Free, automated, trusted certificates |
| Cloud | BinaryLane | Australian-owned, 6 regions (5 AU + Singapore), great API |
| Automation | Claude AI + MCP | Entire stack built and managed by AI |

Cost: $64/month for Full HA

The entire multi-region, highly available WordPress deployment costs approximately $64 per month on BinaryLane. That’s six servers with automated backups, spread across two Australian cities, with database replication and content synchronisation. Compare that to managed WordPress hosting services that charge similar amounts for a single server with no geographic redundancy.

Built Entirely by AI

Every server in this cluster was provisioned, configured, hardened, and deployed using Claude (Anthropic’s AI assistant) through BinaryLane’s MCP (Model Context Protocol) server and a custom SSH MCP server. From creating VPS instances via the BinaryLane API, to installing packages, configuring MariaDB replication, tuning PHP-FPM, deploying Nginx configs, obtaining SSL certificates, and writing this very blog post — the entire infrastructure was built without a human touching a terminal.

That’s the future of infrastructure: describe what you want, and AI builds it. Across cities. With redundancy. In an afternoon.

Note: This is a proof-of-concept deployment. Production environments should consider additional measures including automated database failover (Galera Cluster), Redis for session management, a CDN for static assets, and monitoring/alerting integration.