Welcome back, my uncompressed-delivering Reader,

After our journey together through Security notions, today we’re back to explore optimized Delivery on the Web. When we say optimized, we mainly talk about the possibility for End Users to consume as much of your content as they find interesting, without having to deal with bottlenecks in the First and Last Mile. One of the most important factors in delivering smoothly to End Users (performance is directly related to how much of your content End Users will ultimately consume) is ensuring that your Origin Infrastructure is up, running, and as optimized for Delivery as it can get.

When we say optimized Origin, we don’t just talk about having the right number of Server CPU Cores (or Servers in your Cluster, or Clusters in your Data Center, or Data Centers in your Private Network), Memory, Compression, fast Disks, et cetera, but also about ensuring you get the most “bang for the buck” out of your Hardware specifications. The same Server can undergo a great number of optimization techniques to make it lightning fast in responding to End Users, and that is what today’s Article is about. For a Server to respond to End Users quickly, it not only needs the appropriate resources available to answer requests fast; since many End Users often ask for the _same_ Resource (take a Website’s Homepage, for example), it also makes sense to let the WebServer(s) _not_ deal with all these requests for the same content, especially if it isn’t dynamic. In one word, this is called _caching_ and, although caching is exactly one of the biggest advantages of adopting a Content Delivery Network for delivery, a CDN still cannot fix a badly configured Origin Infrastructure. If your Origin Server takes 2 seconds to respond to a CDN Server’s request for a static asset, your End User will still have to wait 2 seconds in the best-case scenario, even with a CDN in the middle, to receive that silly static asset.

A CDN can only “patch” your Origin’s misconfiguration, so let’s have a look at what we can do to fine-tune your Origin when it comes to caching. There are several Technologies and pieces of Software for caching content locally, and selectively; in most cases these Technologies fall into the Category of Reverse Proxies. Today we’ll touch on one of the most famous of them: Varnish.

Varnish is a lightweight and highly efficient caching HTTP Reverse Proxy Server whose purpose is to reduce the time needed to serve frequently visited pages. Caching can be very desirable for a Site or Web Application that needs to serve loads of static content. Most Websites fall into this group, while more dynamic Web Applications may or may not profit from caching.
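
To give you a first taste of its configuration language, VCL, here is a minimal sketch of how Varnish is pointed at an Origin (Varnish 3 syntax); the host and port below are assumptions, so adjust them to your own backend:

backend default {
    .host = "127.0.0.1";   # assumption: Origin on the same machine
    .port = "8080";        # assumption: Origin listening on port 8080
}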

Just like Squid (we’ll write more about it in our next Article), Varnish is an HTTP “accelerator” which keeps copies of the pages served by the Web Server. When the same page is requested again, Varnish serves its copy instead of requesting it from the Server. This process is called caching, and since it removes the need for the Web Server to regenerate the same page all over again, it results in a huge performance boost.

Varnish has the upside of being designed specifically for use as an HTTP accelerator (Reverse Proxy). It stores most of its cached data in Memory, using fewer disk files and fewer accesses to the File System than Squid, which is a larger and more multi-purpose Software Solution. By serving often-requested pages to Users from cache instead of asking for them from the Origin Web Server, Varnish reduces both the Server’s Database accesses and its CPU usage.

Thanks to this performance gain, it is possible to design Websites to work closely with the web cache and inform Varnish when a page needs to be removed from the cache in order to be regenerated; a sketch of this follows below.
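
As a hedged sketch of that idea in Varnish 3 syntax: the application sends a PURGE request to Varnish to evict a stale page. The ACL restricting purges to localhost is our own assumption:

acl purgers {
    "127.0.0.1";   # assumption: only the local machine may purge
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}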

Default Behavior

There are a few essential things to know about Varnish and its default behavior.
Varnish will automatically attempt to cache any request it handles, subject to special cases:
1) It won’t cache requests which contain Cookie headers or headers that require authorization.
2) It won’t cache requests whose backend response indicates they shouldn’t be cached.
3) It will only cache GET and HEAD Method requests.
Furthermore, Varnish will cache an object for a default of 120 seconds; depending on the sort of content, this may need to be adjusted (see the sketch after this list).
Finally, it will pass along any Expires/Cache-Control/Last-Modified headers that it gets from the backend, unless specifically overridden.
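
As an example of such an adjustment, here is a minimal sketch (Varnish 3 syntax) that raises the Time To Live for static assets; the one-hour value and the file extensions are assumptions for illustration, not recommendations:

sub vcl_fetch {
    # Assumption: static assets may live in cache for an hour
    # instead of the 120-second default.
    if (req.url ~ "\.(css|js|png|gif|jpg)$") {
        set beresp.ttl = 1h;
    }
    return (deliver);
}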

Subroutines

It is likewise important to know that requests handled by Varnish are processed by a variety of subroutines:
vcl_recv: This subroutine is called at the beginning of a request. It decides whether or not to serve the request, whether or not to alter it, and which backend to use, if applicable.
vcl_hash: Called to create a hash of the request data, used to identify the corresponding object in the cache.
vcl_pass: Called if pass mode is entered. In pass mode, the present request is passed to the backend, and the response is likewise returned without being cached.
vcl_hit: Called if the object for the request is found in the cache.
vcl_miss: Called if the object for the request isn’t found in the cache.
vcl_fetch: Called after a resource has been retrieved from the backend. It decides whether or not to cache the backend response as an object, how to do so, and whether or not to modify the object before caching it.
vcl_deliver: Called before a cached object is delivered to the client (a sketch follows this list).
vcl_pipe: Called if pipe mode is entered. In pipe mode, requests for the current connection are forwarded unaltered directly to the backend, and responses are correspondingly returned without being cached, until the connection is closed.
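
To make vcl_deliver concrete, here is a small sketch (Varnish 3 syntax) that marks each response as a cache hit or miss; the X-Cache header name is our own choice for illustration, not a standard:

sub vcl_deliver {
    # obj.hits counts how often this object has been served from cache.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}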

Actions

Each subroutine is ended by an action. Different actions are available depending on the subroutine. Usually, each action relates to a subroutine:
deliver: Insert the object into the cache (provided it is cacheable). Flow will eventually pass to vcl_deliver.
error: Return the given error code to the client.
fetch: Retrieve the requested object from the backend. Flow will eventually pass to vcl_fetch.
hit_for_pass: Create a hit-for-pass object, which caches the fact that this object should be passed. Flow will eventually reach vcl_deliver.
lookup: Look up the requested object in the cache. Flow will eventually reach vcl_hit or vcl_miss, depending on whether the object is in the cache.
pipe: Activate pipe mode. Flow will eventually reach vcl_pipe.
pass: Activate pass mode (see the sketch after this list). Flow will eventually reach vcl_pass.
hash: Create a hash of the request data and look up the associated object in the cache.
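
As a quick illustration of ending a subroutine with an action (Varnish 3 syntax), consider the sketch below; the /admin path is a hypothetical assumption for a section that must never be cached:

sub vcl_recv {
    # Assumption: everything under /admin bypasses the cache.
    if (req.url ~ "^/admin") {
        return (pass);
    }
    return (lookup);
}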

Traffic flow

Configuring Varnish well requires knowledge about requests and responses and how they flow through different subroutines. Take a look at the diagram below to see how everything fits together. If you need more info, please see the Varnish Wiki.
[Diagram: how requests and responses flow through the Varnish subroutines]

How Varnish works: tweaking Varnish

Usually, the default Varnish setup works well. For the most part, there are just a couple of reasons why Varnish should be tweaked for your specific Site or Web Application (a combined sketch follows this list):
When a request is received: Rewriting the request, for example removing cookies so the request can be served from cache.
When a response is returned from the backend: Adjusting the response, for example changing the Time To Live so that it can be cached longer.
When a response is delivered: Rewriting the response, for example overwriting backend Headers so that Browsers will also cache content.
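
Here is a combined sketch (Varnish 3 syntax) touching all three moments; the file extensions, the Time To Live and the Cache-Control value are all assumptions for illustration:

# When a request is received: strip cookies from static asset
# requests so they can be served from cache.
sub vcl_recv {
    if (req.url ~ "\.(css|js|png)$") {
        unset req.http.Cookie;
    }
    return (lookup);
}

# When a response is returned from the backend: extend the Time To Live.
sub vcl_fetch {
    if (req.url ~ "\.(css|js|png)$") {
        set beresp.ttl = 24h;
    }
    return (deliver);
}

# When a response is delivered: overwrite backend Headers so that
# Browsers will also cache the content.
sub vcl_deliver {
    if (req.url ~ "\.(css|js|png)$") {
        set resp.http.Cache-Control = "public, max-age=3600";
    }
    return (deliver);
}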

Applying the Pattern

You should know that the pattern is to make a few adjustments, then let the default Varnish behavior take over. After your custom VCL code executes, the default VCL code will still be executed for the given subroutine. Unless you specifically handle or indicate them, all situations, conditions and actions will fall through to the default behavior, as the sketch below shows.
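
In this sketch (Varnish 3 syntax) we normalize a single thing and deliberately issue no return statement, so flow falls through to the built-in default vcl_recv; the Google Analytics cookie pattern is an assumption for illustration:

sub vcl_recv {
    if (req.http.Cookie) {
        # Strip Google Analytics cookies (assumption: the "__utm"
        # prefix), which would otherwise prevent caching.
        set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
    }
    # No return statement here: the default vcl_recv is appended
    # and takes over, deciding between lookup, pass and pipe as usual.
}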

Varnish as a straightforward Security improvement

In addition to its utility for caching and load balancing, Varnish can be used to conceal information about the Origin Server and thereby deprive Hackers of details useful for targeted attacks. More specifically, with Varnish you can hide, add or override any Header from the Origin Server. For instance, by default PHP reveals its version in Header statements like “X-Powered-By: PHP/5.4.4-10”. In order to hide this Header, edit the file /etc/varnish/default.vcl, uncomment the configuration part for vcl_deliver, and make it look like:
sub vcl_deliver {
    remove resp.http.X-Powered-By;
    return (deliver);
}

You can likewise edit Headers like the Web Server name and its version. By default, Apache sends a Server Header like “Server: Apache/2.2.15 (CentOS)”. It is possible to make it look as though the Server were nginx (for example), should you be feeling mischievous, by changing the configuration for vcl_fetch and including:
sub vcl_fetch {
    unset beresp.http.Server;
    set beresp.http.Server = "nginx";
    return (deliver);
}

With VCL it is possible to filter any part of the Request and Response, which makes Varnish a very capable Web Firewall. For instance, if you need to filter GET requests containing PHP’s passthru function to prevent certain types of Attacks, change vcl_recv and include the following condition:
sub vcl_recv {
    if (req.url ~ "passthru\(") {
        error 404 "No such file";
    }
    return (lookup);
}

Once that takes effect, Varnish will return an HTTP 404 Error to anybody who tries to pass the passthru function in a GET Request. A further filter in the same spirit is sketched below.
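
Purely as an illustrative assumption of what else could be filtered, this condition rejects obvious directory-traversal attempts in the URL:

sub vcl_recv {
    # Reject URLs containing "../" path-traversal sequences.
    if (req.url ~ "\.\./") {
        error 403 "Forbidden";
    }
    return (lookup);
}
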
Varnish can do even more to protect your Website. In fact, there are entire Projects dedicated specifically to this, for example Security.VCL.
Still, Varnish is not intended to take the place of a Web Application Firewall, which is why it lacks essential functionalities such as POST Request inspection. If you are looking for a complete WAF setup, read “Protect and Audit Your Web Server With ModSecurity”.

Conclusion

Varnish is a very effective and powerful caching HTTP Reverse Proxy. Left in its default state, it behaves sensibly, so the best practice is to make small tweaks and let Varnish’s default behavior handle the rest.
Even though its configuration can appear intimidating (there are lots of notions to absorb and options to choose from), the ability to make small adjustments without needing to totally rewrite behavior makes it manageable. The VCL Wiki as well as books provide tons of information, including detailed descriptions of the various actions. You can learn how to install Varnish, configure the Proxy and tweak it, as well as get to know its subroutines, actions and patterns.

I trust that you can now see why Varnish is so popular. It is broadly considered the best Web Caching Proxy available these days. And with so many tutorials online, you should be able to install it by yourself fairly easily and make the tweaks needed to boost the performance, security and availability of your Website.
