Summary
I experimented with running this blog in two different EC2 regions. The original EC2 instance with RDS was running in the EU-CENTRAL region (Frankfurt) and the second EC2 instance was set up in the US-EAST region (N. Virginia). Comparatively speaking, the performance test results (as measured from the US) were better with two EC2 instances than with the single Frankfurt-based instance with CDN. However, overall I found performance to be reduced and abandoned this setup due to its complexity and the difficulty of managing it. For this reason, I am not including detailed instructions below, just the main steps that are needed. You can find detailed instructions elsewhere if need be.
Initial setup
The initial setup was for an EC2 instance in the EU-CENTRAL AWS region, connected to an RDS instance in the same region (for database hosting). The domain was managed by Route 53.
Amazon EFS
The purpose of EFS here is to provide storage, mounted on the EC2 instance, that can also be made accessible to EC2 instances in other regions, since at this time there is no way to sync the instances' EBS volumes across regions directly.
I created an EFS file system and configured it to connect to the EC2 instance in the EU-CENTRAL region. I then moved a number of WordPress folders onto it and created symbolic links in their original locations.
Initially, I moved the uploads, themes, plugins, upgrade, and w3tc-config folders from wp-content. I found that this slowed down WordPress significantly (especially on the admin side), so I opted to move only the themes and uploads folders to the EFS file system, as sketched below.
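To illustrate the general idea, here is a minimal Python sketch, not my exact commands: it assumes the EFS file system is already mounted at /mnt/efs and that WordPress lives under /var/www/html, both placeholder paths.

    # move_to_efs.py - move selected wp-content folders onto an EFS mount
    # and leave symbolic links behind (illustrative, assumed paths)
    import os
    import shutil

    WP_CONTENT = "/var/www/html/wp-content"   # assumed WordPress path
    EFS_ROOT = "/mnt/efs/wp-content"          # assumed EFS mount point

    for folder in ("themes", "uploads"):
        src = os.path.join(WP_CONTENT, folder)
        dst = os.path.join(EFS_ROOT, folder)
        if os.path.islink(src):
            continue                          # already moved and linked
        os.makedirs(EFS_ROOT, exist_ok=True)
        shutil.move(src, dst)                 # copy the data onto EFS
        os.symlink(dst, src)                  # link back into wp-content

The symbolic links keep the paths that WordPress expects, while the actual data lives on the shared file system.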
The next step was to create a replica of this file system in the US-EAST region, which is easily done in EFS properties.
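For reference, the same replication can also be set up programmatically; a minimal boto3 sketch, with a placeholder file system ID:

    # create_efs_replica.py - replicate an EFS file system to US-EAST (boto3 sketch)
    import boto3

    efs = boto3.client("efs", region_name="eu-central-1")

    # fs-12345678 is a placeholder for the source file system ID
    efs.create_replication_configuration(
        SourceFileSystemId="fs-12345678",
        Destinations=[{"Region": "us-east-1"}],
    )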
Amazon RDS
We already had an RDS instance connected to our original EC2 instance (the alternative would be to host the database server directly on the instance itself).
In the RDS settings, I created a read-only replica in the US-EAST region. Again, AWS allows us to create replicas in a different region here. Note that there is a delay of a minute or so in the replication.
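The same replica can be created via the API; a rough boto3 sketch (the identifiers and the source ARN are placeholders), run against the destination region:

    # create_rds_replica.py - cross-region read-only replica (boto3 sketch)
    import boto3

    # Cross-region replicas are created from the *destination* region,
    # referencing the source instance by its full ARN (placeholder below).
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="blog-db-us-replica",
        SourceDBInstanceIdentifier="arn:aws:rds:eu-central-1:123456789012:db:blog-db",
        SourceRegion="eu-central-1",   # lets boto3 generate the pre-signed URL
    )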
Amazon EC2 AMI
At this stage, I created an image (AMI) from the EC2 instance, copied it to the US-EAST region, and launched an EC2 instance in that region from the AMI.
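For illustration, the three steps could look roughly like this in boto3 (the instance ID, image names, and instance type are placeholders, not my actual values):

    # copy_ami_and_launch.py - image the EU instance, copy to US-EAST, launch (boto3 sketch)
    import boto3

    ec2_eu = boto3.client("ec2", region_name="eu-central-1")
    ec2_us = boto3.client("ec2", region_name="us-east-1")

    # 1. Create an image from the existing EU instance (placeholder instance ID)
    image = ec2_eu.create_image(InstanceId="i-0123456789abcdef0", Name="blog-eu-image")
    ec2_eu.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    # 2. Copy the image to US-EAST
    copied = ec2_us.copy_image(
        SourceImageId=image["ImageId"],
        SourceRegion="eu-central-1",
        Name="blog-us-image",
    )
    ec2_us.get_waiter("image_available").wait(ImageIds=[copied["ImageId"]])

    # 3. Launch an instance in US-EAST from the copied AMI
    ec2_us.run_instances(
        ImageId=copied["ImageId"],
        InstanceType="t3.small",   # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )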
Once the EC2 instance was up and running, I adjusted the settings of the US RDS instance to allow connections from this EC2 instance, and did the same for the US EFS file system.
Next, I updated the database settings on the US EC2 instance (in wp-config.php) to point to the new RDS host name, and updated the mount details for the EFS file system to point to the US EFS host name. The EFS file system was set to mount at boot, making those WordPress folders available to the system.
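For the wp-config.php part, the change boils down to rewriting the DB_HOST define; a small illustrative Python snippet (the file path and the RDS host name are placeholders, not my actual values):

    # point_wp_to_us_rds.py - rewrite DB_HOST in wp-config.php (illustrative)
    import re

    WP_CONFIG = "/var/www/html/wp-config.php"                          # assumed path
    NEW_HOST = "blog-db-us-replica.xxxx.us-east-1.rds.amazonaws.com"   # placeholder

    with open(WP_CONFIG) as f:
        config = f.read()

    # Replace the host in: define( 'DB_HOST', '...' );
    config = re.sub(
        r"(define\(\s*'DB_HOST',\s*')[^']*('\s*\))",
        r"\g<1>" + NEW_HOST + r"\g<2>",
        config,
    )

    with open(WP_CONFIG, "w") as f:
        f.write(config)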
Amazon Route 53
At this point, we had two EC2 instances running on the same (or nearly identical) database and sharing the wp-content themes and uploads folders via EFS.
I changed the DNS A record from the default (simple) routing policy to a geolocation policy, pointing calls from North and South America to the US-based EC2 instance and using the EU-based EC2 instance as the default for everywhere else.
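As an illustration, the geolocation records could be created with boto3 roughly as follows (the hosted zone ID, domain name, and IP addresses are placeholders):

    # route53_geolocation.py - geolocation routing for the blog (boto3 sketch)
    import boto3

    route53 = boto3.client("route53")

    def geo_record(set_id, geo, ip):
        # A record with a geolocation routing policy
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",        # placeholder domain
                "Type": "A",
                "SetIdentifier": set_id,
                "GeoLocation": geo,
                "TTL": 300,
                "ResourceRecords": [{"Value": ip}],
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",          # placeholder hosted zone
        ChangeBatch={
            "Changes": [
                # North and South America -> US EC2 instance
                geo_record("us-na", {"ContinentCode": "NA"}, "203.0.113.10"),
                geo_record("us-sa", {"ContinentCode": "SA"}, "203.0.113.10"),
                # Everyone else -> EU EC2 instance (default location)
                geo_record("default", {"CountryCode": "*"}, "198.51.100.20"),
            ]
        },
    )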
Amazon ElastiCache
I realized at this stage that the EU EC2 instance was configured to connect to an ElastiCache (Memcached) server for object and database caching (via the W3 Total Cache plugin). This server was also EU-based, so I created an ElastiCache instance in US-EAST and updated the W3 Total Cache configuration on the US EC2 instance (luckily stored in a file) to point to the new host name.
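For completeness, the US-EAST Memcached cluster could also be created via boto3; a minimal sketch with placeholder identifiers:

    # create_us_memcached.py - Memcached cluster in US-EAST (boto3 sketch)
    import boto3

    elasticache = boto3.client("elasticache", region_name="us-east-1")

    elasticache.create_cache_cluster(
        CacheClusterId="blog-cache-us",             # placeholder cluster ID
        Engine="memcached",
        CacheNodeType="cache.t3.micro",             # placeholder node type
        NumCacheNodes=1,
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    )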
Results
At this stage the configuration was functional. If I connected from the EU, I reached the EU EC2 host, and if I connected (via VPN) from a US IP address, I reached the US EC2 host; the same was confirmed via ping. However, the US EC2 instance was generating a significant number of errors (in the Apache error log) because the database it was connected to was read-only.
Previously, when I tested the initial (EU-only) site with Pingdom from a Washington D.C. testing point, the "speed" was listed as 6.8 s; this was due to the site being hosted in the EU. A similar test from Europe returned 3.4 s. After this change, the result of the test from Washington D.C. was 4.2 s, so there was a significant improvement.
I also experienced some strange login issues: I was able to log into wp-admin on Chrome but not on Firefox (and later vice versa). I got to the point where I was no longer sure which server I was connected to. I was completely confused, and that with only two instances.
Conclusion
In the end, I reverted to the single-instance setup and opted to optimize that one instance rather than having multiple instances to worry about. Still, I think there is room for further optimization or reconfiguration to make such a setup, or a similar one, work.