Using Jekyll For Blazing Fast Websites

When I first started my blog, I used Tumblr. I didn’t choose it for the social integration or community, but rather to offload the management of servers to a third party.

My decision was justified when one of my posts, Captchas Are Becoming Ridiculous, hit the top spot on Hacker News. Over the course of two days, more than 22,000 people visited the post. It’s common to see the servers behind front-page Hacker News posts struggle or even go down entirely under the surge of traffic, but thanks to Tumblr, my website stayed online the entire time.

But while Tumblr was resilient to sudden surges in traffic, the service has had its own struggles and periodically went offline. There are several huge, day-long gaps in my Analytics– a sign that it was time to move to another platform.

Moving to Jekyll

Moving to Jekyll gave me the opportunity to give my entire website a facelift and bring the design of both my portfolio and blog in line. Building the new theme, which is based on Bootstrap, and learning how to use the Liquid template system and Jekyll took less than two working days.

Like before, I’m not managing my own servers. While I love Heroku, I also wanted to be able to absorb a sudden spike in traffic. Because this website is now static HTML, a single Heroku instance would probably have been fine, but I took the opportunity to experiment with some new technology.

Amazon S3 and CloudFront

This Jekyll site is hosted on Amazon S3 behind the Amazon CloudFront CDN. As a result, my website and blog are effectively served from multiple edge servers around the United States and Europe rather than from a single instance, as they would have been on Heroku. This keeps the website blazing fast no matter where my visitors are on the planet.

CloudFront has a default limit of 1,000 megabits per second of data transfer and 1,000 requests per second. If I needed additional capacity, I could always request an increase, but at these maximum rates I would be handling over 300 TB of data a month and 2.6 billion page views. Somehow I don’t think I’ll ever hit that.
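For the curious, the back-of-the-envelope arithmetic behind those figures (assuming a 30-day month of fully sustained throughput) looks like this:

```shell
# 1,000 requests/second sustained for a 30-day month:
echo $(( 1000 * 60 * 60 * 24 * 30 ))          # 2592000000 -- ~2.6 billion requests

# 1,000 megabits/second is 125 megabytes/second; over the same month:
echo $(( 125 * 60 * 60 * 24 * 30 / 1000000 )) # 324 -- terabytes transferred
```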

Performance Numbers

By moving to CloudFront, my blog received a massive performance upgrade. According to Pingdom’s tool, my blog loads faster than 97% to 100% of websites, depending on the page visited.

Home Page

Prior to adding my portfolio to the home page (the images are extremely large and unoptimized at ~1.9 MB total, and I will be reducing them in the future), I was getting load times of 160ms from the Amsterdam server. Without Javascript, the load time decreased to a blistering 109ms– literally as fast as the blink of an eye. By Pingdom’s numbers, this meant my website was faster than 100% of websites. From New York, with Javascript, the website loaded in approximately 260ms. Not bad, but significantly slower.

I am currently evaluating the trade-off of including jQuery and the Bootstrap Javascript file solely for the responsive collapsible menu (try resizing your window and watch the menu– clicking the icon toggles it, which is powered by jQuery). jQuery is approximately 30 KB, and I use very little of its functionality as it stands. The Bootstrap script isn’t as bad, weighing in at 2 KB (I stripped out everything but the collapsible plugin). I’ll likely leave it in because it will give me flexibility in the future, but I really wish Zepto worked with Bootstrap, since it is a third of the size of jQuery.

With images, my page loads in approximately 370ms– pretty good for how large the images are. It takes over 250ms for even the fastest image to download, so once the images are optimized, I’m confident I’ll be able to bring the load time back under 250ms.

Blog Home Page

The blog home page has no images– only the Elusive web font for the social media icons in the sidebar. The font weighs in at ~225 KB and adds nearly 50ms to the load time, for a total of approximately 300ms.

Blog Content Page

This is the most important page– the majority of visitors to my website arrive from Google and land on one of these pages, so it must load quickly.

Thanks to Disqus, I’m seeing load times of 650ms, which is significantly worse than any other page on my website. Unfortunately, there’s little I can do about this, but I feel the ability to have a discussion is important and worth the extra load time.

Causes of Latency

The biggest cause of latency is external media, such as large photographs and the Elusive web font. To further optimize the page, I’ll have to remove the Bootstrap icon set and the web font, opting instead for retina-resolution images for the social media buttons. To prevent additional requests, I can inline these images into the CSS as Base64-encoded data URIs or use a sprite sheet.
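As a sketch of the data URI approach (the filename and CSS class here are just placeholders), a small icon can be embedded directly into a stylesheet like so:

```shell
# Encode an icon as a CSS data URI so it ships inside the stylesheet
# instead of costing a separate HTTP request (icon.png is a stand-in name)
printf '.icon { background: url(data:image/png;base64,%s); }' \
    "$(base64 < icon.png | tr -d '\n')"
```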

Disqus also contributes significantly, causing over 70 requests to external servers to load stylesheets and content. Compared to my website, which makes only 7 requests (including all web fonts and Google Analytics), you can see why the load time is significantly higher on blog articles with comments.

It’s also important to note that these numbers come from Pingdom’s test servers, which have very high bandwidth. Load times will be significantly slower for visitors on 3G connections, but the relative gains will still be apparent.

Optimizations

These load times weren’t achieved by chance, but rather through a build process I wrote in a Makefile.

After running make, Jekyll builds the website. I use the Jekyll Asset Pipeline to combine all of my Javascript and CSS files into single, minified scripts and stylesheets. The Javascript is set to defer so that it doesn’t block the page render.

After this is done, the CSS is compressed using the YUI Compressor, and all HTML, Javascript, and CSS is GZipped.
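The GZip pass can be sketched along these lines (_site is Jekyll’s default output directory; the exact commands in my Makefile may differ):

```shell
# Gzip every HTML, CSS, and Javascript file in place, keeping the
# original filename so S3 serves it under the URL the page expects
find _site -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) \
    -exec sh -c 'gzip -9 -c "$1" > "$1.tmp" && mv "$1.tmp" "$1"' _ {} \;
```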

Finally, using s3cmd, all of the GZipped content is synced to Amazon S3 with the proper headers (Content-Encoding as well as Cache-Control: max-age) and the CloudFront distribution is invalidated. Unfortunately, S3 doesn’t allow the Vary: Accept-Encoding header to be set (technically this is correct, since the server doesn’t actually vary its GZipping based on browser capabilities), so many page speed tests will complain about it.
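A sketch of what that deploy step looks like– the bucket name is a placeholder, and the exact flags in my Makefile may differ:

```shell
# Sync the pre-gzipped site to S3 with the headers it should serve
# verbatim, then invalidate the CloudFront distribution so the new
# content propagates (s3://example-bucket/ is a stand-in)
s3cmd sync --acl-public \
    --add-header='Content-Encoding: gzip' \
    --add-header='Cache-Control: max-age=604800' \
    --cf-invalidate \
    _site/ s3://example-bucket/
```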

After the invalidation propagates, the website is then viewable with the new content.

By offloading all content processing (and GZipping) to my computer at build time, rather than at request time as CloudFlare or another caching layer would do, I shave off a few more milliseconds.

I’m extremely happy with this new setup, and it gives me flexibility I didn’t have with Tumblr, including the ability to enhance some of my articles with rich media and custom, per-article layouts (a la The Verge). When I complete one of these articles, I’ll be sure to do another writeup showing how it’s done.
