How can we scale better?

Hi,

I’m using EasyEngine on a WooCommerce site. The problem is that upgrading from a 512 MB / 1-core VPS to a 4 GB / 4-core VPS didn’t seem to make much of a difference. I have increased the memory allocated to OPcache and memcached, increased the number of PHP-FPM workers, and tuned MySQL with mysqltuner. Page caching is handled by a WP plugin, Hyper Cache, and Cloudflare is also active.

However, when we get a traffic spike the CPU goes to 100%, PHP processes use most of it, and the site gets really slow. Google Analytics real-time stats show around 400 users online.

I refuse to believe that this is the best we can squeeze out of that machine. Somewhere, something is not right and it can be improved.

Does anyone have any ideas?

@AndreiChira

Did you try the ee debug command? I recommend running the following command and monitoring your site’s slow queries:

ee debug --php --fpm --mysql --import-slow-log-interval=1

You can view mysql slow queries at https://example.com:22222/db/anemometer

The logs are pretty clean, no slow requests and no slow queries.

I’ve upgraded the server to 64 GB RAM and a 20-core CPU and managed to get around 1,000 users online. The CPU still went to 100%.

Maybe I’m wrong but such a powerful server should be doing better, right?

@AndreiChira

I don’t know how Hyper Cache works, but EasyEngine recommends fastcgi_cache for page caching.
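For reference, a minimal sketch of what an nginx fastcgi cache setup looks like. The cache path, zone name, socket, and timings here are illustrative assumptions, not EasyEngine’s exact defaults:

```nginx
# In the http{} context: declare a cache zone on disk
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;

        # Serve cached responses for 60 minutes
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 301 302 60m;
        fastcgi_cache_use_stale error timeout updating;
    }
}
```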

@harshadyeola It’s a very lightweight plugin, recommended for shared hosting with low resources. I didn’t see any reason why it shouldn’t also work well on a VPS.

I’ll try fastcgi_cache too.

@AndreiChira I don’t remember when or where I found a problem with Hyper Cache. I was testing multiple caching scenarios (e.g. WP Rocket, W3TC (Pro), Hyper Cache) and had the same problem. It seemed the problem was Hyper Cache not being able to write its cache fast enough, and I later found it was a common complaint on their forums. Check there; maybe you will find the answer.

For now, I’m testing HHVM with PHP 5 as a fallback, plus MariaDB 10.1, nginx with fastcgi_cache, and memcache. You can also add PageSpeed to the stack. I got really good results from that configuration. I know you are a Debian fan, but I had some really impressive results with Ubuntu. I hope I will have time to test Redis too, so I can remove W3TC (a pretty buggy plugin), and I still need to find a good object-cache plugin.

I just set up my first WooCommerce instance. It is definitely a heavy bunch of code and puts a big drag on PHP. I have things running acceptably by using a combination of:

nginx fastcgi_cache for whole-page caching. Note that this will not help you much with WooCommerce unless you are very careful to set up the right cache exclusions in your nginx conf file; see this.
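A hedged sketch of the kind of WooCommerce cache exclusions meant here, inside the server block. The URI and cookie patterns below are the commonly used ones, not a verified complete list:

```nginx
set $skip_cache 0;

# Never serve cached pages for cart, checkout, or account pages
if ($request_uri ~* "/cart/|/checkout/|/my-account/") {
    set $skip_cache 1;
}

# Bypass the cache for logged-in users and active WooCommerce sessions
if ($http_cookie ~* "wordpress_logged_in_|wp_woocommerce_session_") {
    set $skip_cache 1;
}

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
```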

HHVM, running the PHP code instead of php-fpm. Note that there are some issues with WooCommerce in places; I posted a workaround here.

Redis, which I am exploring for object caching (still looking for the right plugin/drop-in) instead of memcached + W3 Total Cache. I am not doing any database caching other than MySQL’s native query cache. UPDATE: see the comment below for a working Redis drop-in solution.

The whole setup is still being refined and needs some babysitting, but for the most part it works, and works pretty fast. When I have everything stable, I’ll try to write up one post that details everything with links, config files, etc.


Thanks! Quick question… I played with wp-redis-cache from Benjamin Adams (https://github.com/BenjaminAdams/wp-redis-cache). Is it the same as the plugin mentioned above, or is it better?

That plugin does page caching as well as object caching, and I prefer to use nginx for page caching. I forgot to mention this in my list above, but nginx-helper from rtCamp is a must-have for using nginx fastcgi_cache.


So today, for Redis object caching, I am using a single-file “drop-in” instead of a plugin. It is actually part of a plugin, but the plugin itself croaks when I try to use it. You don’t need the whole plugin anyway, just the drop-in part.

Here’s what I did:

  1. Get https://wordpress.org/support/plugin/wp-redis

  2. Don’t install it; instead, just copy the file object-cache.php out of it.

  3. (Optional, but strongly recommended if you run more than one WP site on your server) Modify your wp-config.php file to include a line like this: define('WP_CACHE_KEY_SALT', '#9<7@0@4v^Fi|OR{k8XvIA7OFz?cXXKawC+nywmP9y$_Z%4hm4uNn$*1r$-{en#S'); Of course, do NOT use that salt or the same one everywhere; you need a different, unique salt for each site. You can generate new ones here: https://api.wordpress.org/secret-key/1.1/salt/

  4. Put object-cache.php into your wp-content directory and make sure the permissions let your web server read it.
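The steps above can be sketched as shell commands. The site root /var/www/example.com/htdocs and the zip download URL are assumptions; adjust for your install:

```shell
cd /tmp
wget https://downloads.wordpress.org/plugin/wp-redis.zip
unzip -o wp-redis.zip

# Only the drop-in file is needed, not the whole plugin
cp wp-redis/object-cache.php /var/www/example.com/htdocs/wp-content/

# Make sure the web server user can read it
chmod 644 /var/www/example.com/htdocs/wp-content/object-cache.php
```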

It seems so far (a few hours) to be working well. You can see that it is working by running redis-cli info memory; you should see the used_memory numbers going up over time as Redis fills its cache.


Thank you guys. I’ll try fastcgi cache and HHVM.

I have a question about HHVM: how do you know how much memory it is going to use? With php-fpm I can set a maximum of only 5 processes on a low-memory machine. Can you do that with HHVM?

Memory use is the one real unsolved problem with HHVM, and at the moment the answer to your question is that you cannot know how much memory it is going to use, because there are multiple unsolved slow memory leaks. HHVM runs as a single process. Under load it internally spawns threads, up to the number of cores in your CPU, but it is all managed internally and you don’t see those threads the way you see child php-fpm processes.

There are a lot of different memory settings in the HHVM config and they are a bit arcane. You might want to read these:

Until the memory-leak issues are solved, I am running a script via cron that checks the amount of memory HHVM is using and restarts it if it goes over a threshold. I run it every ten minutes. Here’s the script: https://gist.github.com/pjv/d5012a4a86416c2a9998 - if you want to use it, you’ll need to get ps_mem (link in the script), and you’ll want to change the 2.5 GB to whatever is appropriate for your environment. 2.5 GB is what I use on my 8 GB server.
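The idea can be sketched like this. To be clear, this is not the linked script: summing RSS via ps is a rougher measure than ps_mem, and the service name is an assumption:

```shell
#!/bin/sh
# Restart HHVM if its resident memory exceeds a threshold (in KB).
# Intended to be run from cron, e.g. every ten minutes.
THRESHOLD_KB=2621440   # 2.5 GB, as in the post above

# Sum the RSS of all hhvm processes (prints 0 if none are running)
RSS_KB=$(ps -C hhvm -o rss= | awk '{sum+=$1} END {print sum+0}')

if [ "$RSS_KB" -gt "$THRESHOLD_KB" ]; then
    service hhvm restart
fi
```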

All of these issues and work-arounds might make it seem like HHVM isn’t worth it yet. But you should try it before you decide that. It’s really, really fast.

One more thing: do not run W3TC with HHVM. It will make HHVM crash.

Do you have a remote server for MySQL?

Check this as well:

@JDdiego I thought about separating MySQL from PHP and running two servers; I’ll test it and see how it goes.
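If you go that route, the WordPress side is a one-line change in wp-config.php (the hostname and port below are placeholders); the real work is granting the MySQL user remote access and firewalling the database server:

```php
// wp-config.php: point WordPress at the remote MySQL server
define('DB_HOST', 'db.internal.example.com:3306');
```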

Hi @AndreiChira

It’s been a long time, and we haven’t heard from you, so it looks like your issue is resolved. I hope your decision to use a remote server for the database worked out for you.

I am closing this support topic for now. Feel free to create a new support topic if you have any further queries. 🙂