What Happens When an Image Request is Made

By design, the experience of using imgix is pretty seamless. Once a Source for your photos has been set up, you simply add the parameters for the transformations you need to a photo’s URL, and the image is served to your specifications almost instantly.
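
For example, a 300-pixel-wide derivative can be requested just by appending w=300 to the image URL as a query parameter. The domain below is a hypothetical placeholder for your own imgix subdomain:

```typescript
// Hypothetical Source domain; substitute your own imgix subdomain.
const baseUrl = "https://example.imgix.net/photos/puppy.jpg";

// Transformation parameters are ordinary query-string key/value pairs.
const params = new URLSearchParams({ w: "300" });

// => https://example.imgix.net/photos/puppy.jpg?w=300
const imageUrl = `${baseUrl}?${params.toString()}`;
console.log(imageUrl);
```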

Yet this seeming simplicity hides a lot that’s going on under the hood. Images are rendered on demand and then delivered through a robust content delivery network with a sophisticated caching layer. This means a request actually goes through quite a few more steps than you might expect.

There are big benefits to this sophisticated approach—it cuts latency, improves stability and maximizes performance. Yet it also has some implications for how imgix is best implemented. For that reason, we thought it might be useful to give an overview of what happens at each stage in the process.

The imgix architecture

First, let’s take a look at the pieces involved in responding to the request. At a very high level, a typical setup with imgix is split into three layers:

  1. imgix CDN (Content Delivery Network): A network of globally distributed edge nodes. Currently, requests are handled by CDN nodes in 16 different countries.

  2. imgix rendering cluster: Where the magic happens. This is our high-performance image-processing infrastructure, which performs the render operations that transform images to match the requested parameters.

  3. The Source: This is where your master images are hosted, and where imgix pulls from the first time an image is requested. Typically this is an Amazon S3 bucket, but various flavors of self-hosting are also possible (the sketch just after this list shows how a stored object maps to an imgix URL).
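
To make the relationship between these layers concrete from the outside: the path portion of an imgix URL identifies the master image in the Source, and the query string describes the derivative you want. The bucket, domain, and object key below are hypothetical placeholders:

```typescript
// Hypothetical names throughout; the mapping, not the values, is the point.
// A master image stored in the Source (an S3 bucket in this example):
const masterInSource = "s3://my-photos-bucket/photos/puppy.jpg";

// The same image addressed through the Source's imgix domain, with a
// derivative requested via the query string:
const renderedUrl = "https://example.imgix.net/photos/puppy.jpg?w=300";

console.log(masterInSource, "->", renderedUrl);
```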

A request is made

[Diagram: flow of an image request through imgix]

Let’s say a user in London views a page with a photo of a cute puppy on his iPhone 7. The original of this image is stored in S3 and measures 4000×3000 pixels, which is large enough to print on a billboard and overkill for the iPhone’s 750×1334 display. But the website owners use imgix, and they’ve set it up so that the photo is requested with the w=300 parameter. The image has only recently been uploaded, so this is the first time it has ever been requested.
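
For a sense of scale, here is the arithmetic on that resize, assuming the aspect ratio is preserved when only a width is specified:

```typescript
// The 4000×3000 master resized to w=300, aspect ratio preserved.
const master = { width: 4000, height: 3000 };
const targetWidth = 300;
const targetHeight = Math.round((master.height * targetWidth) / master.width);

console.log(`${targetWidth}×${targetHeight}`); // 300×225

// Pixel counts: 12,000,000 in the master vs. 67,500 in the derivative,
// roughly a 178× reduction in pixels pushed to the phone.
```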

In order to get the lowest possible latency, imgix will automatically try to fulfill this request from as close to the end user as possible. This time, since it’s dealing with a new image, it will go through the entire request pipeline (sketched in code after the list):

  1. First, it will attempt to pull a w=300 version of the puppy picture from the edge node in London. When the London node sees that it doesn’t already have the object it needs, it forwards the request to the CDN shield, which might have already stored the object on behalf of one of London’s peer locations.

  2. Since the necessary photo isn’t yet in the CDN, the request then moves to the rendering cluster. The rendering cluster requests the full-size image from the Source (Amazon S3) and stores it in an origin caching layer. It then performs the necessary operations to resize the picture to 300 pixels wide.

  3. The rendering cluster sends the newly resized version to the CDN shield, which passes it along to the London edge node, where it is cached and served to the end user.

  4. A puppy appears on the iPhone screen, the user goes “awww,” and everyone is happy.
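
To make the shape of that lookup cascade concrete, here is a heavily simplified sketch. The cache layers and helper functions are illustrative stand-ins, not a picture of imgix’s actual internals:

```typescript
// Each cache layer is modeled as a simple in-memory map; in reality these
// are separate systems in separate locations.
type CacheLayer = Map<string, Buffer>;

const edgeCache: CacheLayer = new Map();   // nearest CDN edge node (e.g. London)
const shieldCache: CacheLayer = new Map(); // CDN shield shared by peer edges
const originCache: CacheLayer = new Map(); // full-size masters near the renderers

// Stand-in for pulling the full-size master from the Source (e.g. S3).
async function fetchFromSource(path: string): Promise<Buffer> {
  return Buffer.from(`master bytes for ${path}`); // placeholder
}

// Stand-in for the rendering cluster's transform step (e.g. resize to w=300).
async function render(master: Buffer, params: string): Promise<Buffer> {
  return Buffer.from(`${master.toString()} rendered with ${params}`); // placeholder
}

async function serveImage(path: string, params: string): Promise<Buffer> {
  const key = `${path}?${params}`;

  // 1. Check the nearest CDN edge node.
  const atEdge = edgeCache.get(key);
  if (atEdge) return atEdge;

  // 2. Check the CDN shield, which may hold renders made for peer edges.
  const atShield = shieldCache.get(key);
  if (atShield) {
    edgeCache.set(key, atShield); // cache at the edge on the way out
    return atShield;
  }

  // 3. Rendering cluster: use the master from the origin cache if present,
  //    otherwise fetch it from the Source, then produce the derivative.
  const master = originCache.get(path) ?? (await fetchFromSource(path));
  originCache.set(path, master);
  const derivative = await render(master, params);

  // 4. Cache the new derivative at each layer as it flows back to the user.
  shieldCache.set(key, derivative);
  edgeCache.set(key, derivative);
  return derivative;
}

void (async () => {
  await serveImage("/photos/puppy.jpg", "w=300"); // first request: rendered from the Source
  await serveImage("/photos/puppy.jpg", "w=300"); // repeat request: served from the edge cache
})();
```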

All of this happens fast—usually too fast for there to be a perceptible lag for the user. And because of our opportunistic caching, subsequent requests are even faster.

From this point on, any time a w=300 version of this picture is requested, it’s served from the closest location to the end user that holds a cached copy, cutting latency.

Something to note about the CDN layer is that we charge a single flat rate no matter where the image is being delivered, unlike many providers who charge more for certain nodes. This means customers don’t need to worry much about where their traffic is coming from, at least in this context: costs will stay the same.

It’s also no longer necessary for imgix to touch the master image in S3 once an initial request has been made. Even if a new derivative is required, imgix can simply perform the necessary transformations using the full-size image held in the origin cache near the rendering cluster. We respect origin Cache-Control headers, and this behavior is also configurable at the imgix Source: if the expiration date on content passes, we will revalidate or re-fetch it. This strikes a good balance, keeping latency low while ensuring that imgix always serves the freshest content.
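
As one concrete (and purely illustrative) example of origin Cache-Control headers: when the Source is an S3 bucket, a Cache-Control header can be attached to the master at upload time, for instance with the AWS SDK for JavaScript v3. The bucket name, object key, and max-age below are placeholders, and this is the generic S3 mechanism rather than anything imgix-specific:

```typescript
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function uploadMaster(): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-photos-bucket",     // placeholder bucket name
      Key: "photos/puppy.jpg",        // placeholder object key
      Body: await readFile("puppy.jpg"),
      ContentType: "image/jpeg",
      // Downstream caches that honor this header may keep the object for a
      // day, then revalidate with the origin before serving it again.
      CacheControl: "public, max-age=86400",
    })
  );
}

void uploadMaster();
```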

What it all means

This caching and CDN layer has a few implications. Since it’s a bit more complex than serving a raw image directly, the baseline latency is slightly higher the first time an image is requested through imgix, but only the very first time. Of course, this is the classic CDN tradeoff: most requests are served from cache most of the time, so it’s a good bargain to make in most cases. If slowing down 10 requests lets you speed up 10 thousand, why wouldn’t you?

Caching aggressively minimizes unnecessary work, and that makes us more resource-efficient. This lets us deliver highly customized images all over the world, at a price point comparable to less sophisticated systems that don’t offer as many options or as much flexibility. It also makes a separate CDN unnecessary, because imgix provides a best-of-breed CDN on its own, with no additional setup.

Of course, our goal is for our customers to not have to think about any of this. Add a few parameters, get a customized image—we think it really should be that simple. You’ve got enough on your plate without worrying about how your images are delivered and what it might cost.

For more information about imgix's caching and delivery, see our CDN page.

Stay up to date with our blog for the latest imgix news, features, and posts.