EasyEngine slow with many sites

Hi guys,

I’ve been looking into this for a week and can’t figure out what the issue is. I have an AWS setup with a separate RDS instance and a relatively big server. Static files are offloaded to a CDN, and Redis handles page caching.

Here’s the problem: the setup slows down significantly with every new virtual host. I found a couple of DDoS throttles in the EE setup and increased the allowed number of requests, but it’s still significantly slower. How?

With a single website, the entire site loads in under 1 second. Add 10 more sites, and the same site loads in 3–5 seconds (based on New Relic).

Your first thought will be that it’s a traffic issue. I thought so too, so I duplicated my setup and connected to the copy via my hosts file to make sure the only traffic hitting the server was mine. Same speed.
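
For what it’s worth, here’s roughly how I measured it: a quick Python sketch that requests one URL repeatedly and reports the median load time. The URL and sample count are placeholders, and it assumes the `requests` library is installed:

```python
import time
import statistics
import requests  # third-party; pip install requests

# Placeholder test URL; point it at the duplicated server via your hosts file.
URL = "https://example.com/"
SAMPLES = 10

timings = []
for _ in range(SAMPLES):
    start = time.monotonic()
    requests.get(URL, timeout=30)
    timings.append(time.monotonic() - start)

print(f"median load time over {SAMPLES} runs: {statistics.median(timings):.2f}s")
```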

I feel there’s some setting somewhere in EE (or maybe in the default Nginx config?) that makes it slower as the number of virtual hosts grows.

Any suggestions would be much appreciated!

It’s the first time I’ve heard of a server being slow because of a “high” number of vhosts.

One customer of mine has a not-so-large VPS with 62 WordPress sites, and there is no slowdown.

The same server once had almost 200 vhosts, but my client merged a lot of websites, killed some domains, redirected a few others… He never complained about high loading times or anything like that.

Of course, we don’t use AWS, RDS, or anything like that, just good old local MariaDB.

It’s new to me too. And as you said, it’s not a big number of websites at all! One thing I’ve just started suspecting is NFS. All the WordPress files actually live on NFS to allow autoscaling. I wonder if that could affect it? I’ll try switching off NFS, moving www to the local hard drive, and seeing how it goes (see the sketch below for the kind of comparison I have in mind).
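
If anyone wants to check their own setup, this is the kind of test I mean: stat() every file under a tree, once on the NFS mount and once on local disk, and compare. The paths are placeholders for your own mounts:

```python
import os
import time

# Placeholder paths: one tree on the NFS/EFS mount, one on local disk.
PATHS = {"nfs": "/var/www-nfs", "local": "/var/www-local"}

for label, root in PATHS.items():
    start = time.monotonic()
    count = 0
    # stat() every file under the tree; this is pure metadata IO,
    # which is roughly what PHP does while resolving WordPress's includes.
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            os.stat(os.path.join(dirpath, name))
            count += 1
    elapsed = time.monotonic() - start
    print(f"{label}: stat() on {count} files took {elapsed:.2f}s")
```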

For sure, NFS has always been an awful bottleneck for me. I’m curious what you’ll find in your testing.

Could be the reason.

OK, fixed: it was NFS. It’s quite disappointing that such a progressive technology as AWS doesn’t have this figured out yet; even worse, their official WP setup recommends NFS.

I talked to AWS support and also found this: https://forums.aws.amazon.com/thread.jspa?threadID=235976&start=0&tstart=0

Turns out that with a growing number of small files, EFS (the AWS version of NFS) degrades web server performance through metadata IO. So I need to rebuild my server with local storage and write a script to copy files across the load-balanced instances. Thanks for the help!

An ideal way to handle this is to store media uploads on S3 using a plugin like https://github.com/humanmade/S3-Uploads or https://deliciousbrains.com/wp-offload-s3/. This helps make your server stateless, so you can spin up multiple servers behind a load balancer without trying to keep them all in sync.

As per the original description, we do have S3 and all uploads are offloaded. That doesn’t make the server stateless, though; it’s WordPress we’re talking about, with many websites and many plugins. Auto-updates for plugins, minor WP core updates, manual updates, and so on are why (as per the actual AWS whitepaper for WP) NFS is presented as the ideal solution. Except it isn’t for many sites.

Long story short, the solution we went with is to go back to EBS, create a master drive, and use a mix of AWS tooling and WP-CLI to push any updates across the network.
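
For anyone curious, the rough shape of it: run the updates once on the master, then push the files out to the replicas. Here’s a sketch under my own assumptions; the site roots, replica hostnames, and sync paths are placeholders, while `wp plugin update --all`, `wp core update`, and the `--path` flag are standard WP-CLI, with rsync doing the copying:

```python
import subprocess

# Placeholder values: site roots on the master EBS volume
# and the app servers behind the load balancer.
SITES = ["/var/www/site1", "/var/www/site2"]
REPLICAS = ["app1.internal", "app2.internal"]

# 1. Run updates once, on the master, via WP-CLI.
for site in SITES:
    subprocess.run(["wp", "plugin", "update", "--all", f"--path={site}"], check=True)
    subprocess.run(["wp", "core", "update", f"--path={site}"], check=True)

# 2. Push the updated files from the master out to each replica.
for host in REPLICAS:
    subprocess.run(
        ["rsync", "-az", "--delete", "/var/www/", f"{host}:/var/www/"],
        check=True,
    )
```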