Moving static content to S3, ngx_pagespeed, and opinions on my overall setup

Hi guys,

English is my second language. Sorry for any mistakes.

First of all, thanks for the great tutorials and content on rtCamp. Thanks to you guys I've set up an NGINX server that is, at this point, performing way better than my old Apache server.

But just like everyone else, I would like to improve even further: squeeze more out of a single server and improve my loading speed.

Here is what I have so far:

NGINX with fastcgi_cache serving content from tmpfs (a config sketch follows this list);

PHP-FPM with the APC opcode cache;

WordPress with the Nginx Helper plugin; purging is working as intended;

ngx_pagespeed doing its magic on .js/.css files and on some images - I don't know why it seems to ignore a small percentage of the images on the page. Anyway, I cache pagespeed-optimized assets to tmpfs and have zero cache misses.
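
For anyone who wants to compare notes, here is a minimal sketch of what this kind of setup might look like. The paths, zone name and sizes below are placeholders rather than my exact config, and both cache directories are assumed to be tmpfs mounts:

```nginx
# Goes in the http{} context; /var/run/nginx-cache is assumed to be a tmpfs mount.
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/htdocs;

    # ngx_pagespeed writes its optimized assets to another tmpfs mount
    pagespeed on;
    pagespeed FileCachePath "/var/ngx_pagespeed_cache";

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;

        # Full-page cache served from the tmpfs-backed zone defined above
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 60m;
    }
}
```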

I use Debian Wheezy with the Dotdeb repo. I run everything on AWS.

I did siege testing, and NGINX + fastcgi_cache can take the heat with dozens of simultaneous users and no delays between requests. It is impressive.

#

I think the next step for me would be to move all static content, especially images, to S3. I don't worry about a CDN since all my hits come from a single country (Brazil) - but I could be wrong.

I would like some opinions on this. I think I can do this with W3 Total Cache, and for newer websites just use some plugin that uploads directly to S3. What I would lose is ngx_pagespeed's lossless compression (images) and minification (JS, CSS).

What do you guys think? Is there any good way to bulk-optimize images and replace the sources so that the optimized images get moved to S3? Do you think the benefit is worth the effort? I mean, how much faster can the load time for the end user be compared to serving static content with NGINX from RAM? Of course, scalability also needs to be considered - what's more scalable than S3?

I aspire to better load times and a more scalable solution. Using S3 for the heavy lifting is, IMO, the best way to go. I could squeeze a bunch of websites onto a single EC2 small instance; fastcgi_cache would do the job serving up to hundreds of simultaneous users, and S3 would serve the static content.

Why I need this: last year, one of the websites I host was featured on Brazil's most-watched Sunday-night program (Fantastico - nationwide). A couple of seconds after the news report, analytics showed 20,000 simultaneous users on the website. Of course Apache could not handle it. I had to turn on a load balancer, spin up a couple of extra servers, plug everything into a heavy RDS instance, and hope for the best. For the next couple of days I was getting 1000%+ of my normal traffic, 24 hours a day, so I had to keep the load balancer on and bill the client a fat invoice. The client lost a lot of conversions in those first hours of the spike that I could not hold. This cannot happen again, with this website or with the others I manage.

Any help is welcome. Best regards.

@Dominique Dutra

Sorry for my delayed reply. I needed some time to read your long post; I was very busy for the last 2 weeks.

You have done an impressive setup so far.

For CDN

Rather than going for Amazon S3, I think it is better to use Amazon CloudFront: http://rtcamp.com/tutorials/cdn/amazon-cloudfront-wordpress-w3-total-cache/

In your case, I would have gone with cdn.net and enabled a few South America/Brazil locations, which have a low per-location price. Nothing beats cdn.net when it comes to optimizing your spending!

Image Tweaking

If you really want to reduce image size, WP Smush.it is the first option - http://wordpress.org/plugins/wp-smushit/

If you have images larger than 1MB, try https://github.com/doda/imagy

Linux Tweaking

Next, you can tweak PHP-FPM and sysctl settings - http://rtcamp.com/tutorials/php/fpm-sysctl-tweaking/

Without that tuning, you may notice that NGINX and PHP refuse to handle the traffic load even though your server has spare CPU capacity.
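
Purely as an illustration of the NGINX side of that tuning (the values below are assumptions, not taken from the tutorial), the worker limits are usually the first thing to raise; the kernel (sysctl) and PHP-FPM pm.max_children side is covered in the link above:

```nginx
# Illustrative values only -- size these to your instance.
worker_processes     auto;    # one worker per CPU core
worker_rlimit_nofile 100000;  # raise the per-worker open-file limit

events {
    worker_connections 4096;  # the stock 768/1024 is often the first limit hit under load
    multi_accept       on;
}
```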

Thanks a lot for your time.

I'll be using CloudFront or Akamai; still deciding. Akamai has a lot of edge locations in Brazil, CloudFront only has one.

And I think I will use imagy, thanks for the tip. Best regards.

I think Akamai will cost more.

On cdn.net, they have a Miami location (close to South America) for $25/TB.

cdn.net also has a Brazil location for $125/TB (5 times as costly).

CloudFront will cost you, I guess, $120/TB plus some extra charges for HTTP requests.

I think South America has high costs for electricity and Internet bandwidth, as I couldn't find pricing below $100/TB anywhere.

As a personal suggestion, I would stick to the Miami edge location rather than paying 5 times as much to reduce latency by a few milliseconds.

Again, you know your business well, so there might be some reason for your choice. :slight_smile:

Rahul,
Thanks.

Regarding cdn.net - it is out of the question. My EC2 instance is in the Brazil region (Sao Paulo); there is no point in sending my users to Miami if I can serve from EC2. CloudFront has only one edge location in Brazil, so it is also out of the question, unless I need to offload traffic quickly - then I am one Domain Rewrite (ngx_pagespeed) rule away from offloading.
Akamai would be my best option because they have a lot of edge locations in Brazil; the problem is the price. I cannot buy directly, and the Brazilian reseller requires a 1-year contract plus a minimum traffic commitment that I do not have. I tried contacting a re-reseller - a company that buys from the Akamai reseller and sells per GB.
They charge US$ 230 per TB. That's insane. So Akamai is also out of the question.
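
For anyone reading along, the "Domain Rewrite rule" I mean is roughly this - hostnames are placeholders, just a sketch of the idea rather than my production config:

```nginx
# Rewrite asset URLs from the origin host to a CDN hostname,
# so static files can be offloaded without touching WordPress itself.
pagespeed on;
pagespeed MapRewriteDomain cdn.example.com www.example.com;
```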

I am still experimenting with different options. Best regards.

In your case, I would recommend NOT using a CDN. I wasn't aware that your origin/main server is already in Sao Paulo (Brazil).

As most of your audience is near your main (EC2) server, serve files from your EC2 instance.

As long as CPU load is under control, you don't need to worry. If you find the EC2 instance overloaded, go for a bigger instance.

You can also enable open_file_cache - http://rtcamp.com/tutorials/nginx/open-file-cache/
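
A minimal example of enabling it (the values below are common defaults, not taken from the tutorial verbatim):

```nginx
# Cache open file descriptors, sizes and mtimes of frequently served static files.
open_file_cache          max=10000 inactive=5m;
open_file_cache_valid    2m;
open_file_cache_min_uses 2;
open_file_cache_errors   on;
```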

Lastly, if you are planning to use a CDN only to overcome the browsers' per-domain connection limit, you can simply create fake CDN subdomains and point them at your server. I have never used this method, but it's there!
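
Since you already run ngx_pagespeed, one way to wire that up is its domain sharding - this is only a sketch with made-up subdomains, and I have not used it myself either:

```nginx
# static1/static2 are fake "CDN" subdomains whose DNS records point back at this same server.
server {
    listen 80;
    server_name example.com static1.example.com static2.example.com;
    root /var/www/example.com/htdocs;

    pagespeed on;
    # Spread rewritten asset URLs across the shard hostnames so browsers
    # open more parallel connections than the per-domain limit would allow.
    pagespeed ShardDomain example.com static1.example.com,static2.example.com;
}
```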

Hello Dominique,
Well done! - I’m trying to get all of these working at the same time as well and I am struggling … lol …
Would you be prepared to share the salient points of your nginx virtual host config file please?
This may not help you in Brazil, but you could try it for free to test: I tested Incapsula.com - a proxy in front of a website - and Incapsula not only kept all the nasty bots & stuff out, it could go from idling to serving several thousand simultaneous users without a noticeable delay. They don't have endpoints in Brazil, though. But as insurance against a huge traffic surge they are nice.

@woody-woodpecker-963 I tried to find a cheap/affordable CDN in Brazil for a client, but it did not work out.

If you know of any CDN that costs less than USD 50 per TB, please share. For this particular client of ours, we are using a North America edge location.