Make Your Webapp Shine With Varnish - Part 1

Part 1: What is Varnish?

The Varnish Cache project is one you really need to get familiar with if you manage any high-volume websites. It can mean the difference between a self-destructing web app that buckles under its own load, and an apparently seamless web app serving thousands of concurrent connections with relative ease.

How does it work?

Varnish acts as a proxy server: when a user sends a GET request, Varnish looks up a cached version in its internal store. If it cannot find one, it passes the request on to the “back end” (in this case an Apache server) and caches the response for subsequent accesses.
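As a sketch of that setup, a minimal VCL configuration (2.1 syntax) only needs to tell Varnish where the back end lives; the host and port below are assumptions for illustration, not a recommended layout:

```
# Hypothetical minimal VCL (2.1 syntax): Varnish answers on the
# public port and forwards cache misses to a local Apache back end.
backend default {
    .host = "127.0.0.1";   # Apache running on the same machine (assumed)
    .port = "8080";        # Apache moved off port 80 so Varnish can take it
}
```

With no further VCL, Varnish applies its built-in default logic: cacheable GET responses are stored and served from cache until their TTL expires.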

Now you may ask yourself why you need this. It boils down to what you are trying to achieve with your web application. If your application is heavily reliant on dynamic content and regularly sees some 400 concurrent users, for example, let's assume the following:

  1. 400 concurrent unique users
  2. Average page render time is 0.85s

The Math

Based on this, if you were to place Varnish in front of your application with a 60-second TTL (time to live: the length of time Varnish will hold an object in cache):

  1. Varnish TTL: 60 seconds
  2. 400 / 0.85 = 470.59 requests/second
  3. 470.59 × 60 = 28235.29 requests/minute
  4. Factor of reduction to the “back end”: ×28235.29

So in the example above, simply by caching a page for as little as 60 seconds, the requests per minute reaching the back end drop from 28235.29 to 1. Even reducing the cache time to 10 seconds in this example would still give a ×4705.88 reduction.
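The arithmetic above can be checked with a short script (assuming, as in the example, that all 400 users are requesting the same cached URL):

```python
# Back-of-the-envelope check of the figures in the example above.
users = 400          # concurrent unique users
render_time = 0.85   # average page render time in seconds

req_per_sec = users / render_time   # requests hitting the app per second
req_per_min = req_per_sec * 60      # ... per minute

# With a 60-second TTL the back end sees 1 request per minute per URL;
# with a 10-second TTL it sees 6.
reduction_60s = req_per_min / 1
reduction_10s = req_per_min / 6

print(round(req_per_sec, 2))    # 470.59
print(round(req_per_min, 2))    # 28235.29
print(round(reduction_10s, 2))  # 4705.88
```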

Why is this reduction a good thing? Time on CPU, for one. Varnish, when configured correctly, is very fast, and even with an out-of-the-box configuration it is still going to be much faster than your dynamic web application.


So here ends a brief introduction to Varnish and why you really want to start using it. In the following parts we will cover:

  • Configuration overview
    • Brief overview of each subsection, based on the 2.1 syntax
  • Advanced configuration
    • Load balancing
    • Failover handling
    • Raising cache hit rate
    • Pros and cons of each setup
    • Benchmarks