
April 20, 2012

Using Windows Azure Caching with our ASP.NET Web API project

As mentioned in the last post, while working on deploying our web-site and its related services to Windows Azure, I started to chew on the economics of Azure hosting. This is especially relevant as I start to see my free 3-month subscription’s resources being burned through by all of you checking out the links in the last post. ;-)

Here’s what I found… “extra small” instances are a mere sixth of the cost of “small” instances (not taking into account the 6-month pre-purchase discount on small instances, admittedly), which got me thinking: if I can reduce the resources needed for each instance, perhaps using a central caching service, then that would make financial sense.

For example: say we chose two small instances per month. According to the Azure pricing calculator, that would cost $180 per month (or $143.98 with the 20% reduction for a 6-month plan). If we could scale down to using two extra small VM instances, that would cost $30 per month. Of course we’d need to add the cost of the caching service (which currently runs at $45 per month for 128 MB, more than enough for our needs, size-wise), but it still works out well, cost-wise. It also makes it really cheap for us to scale upwards, depending on usage, as one cache could serve multiple VM instances.
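Just to make the arithmetic explicit, here’s the comparison as a tiny F# sketch (the names are mine; the figures are the ones quoted above):

```fsharp
// Approximate monthly costs quoted above (2012 Azure pricing)
let twoSmall = 180.0       // two small instances
let twoExtraSmall = 30.0   // two extra small instances
let cache128Mb = 45.0      // 128 MB caching service

// Even after paying for the shared cache, the smaller instances win
let monthlySaving = twoSmall - (twoExtraSmall + cache128Mb)
// → 105.0 per month
```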

So let’s look at what our web-services are currently doing, to see what opportunities we have for performance optimization.

As mentioned previously, the service currently does a lot of repeated operations, which isn’t optimal. Every time it gets a request to generate the 2D Apollonian gasket or 3D Apollonian packing for a given radius and recursion level, it calculates the results again. And actually the results for different radii – but the same recursion level – are basically the same: it’s really just a matter of multiplying the unit result (i.e. for a radius of 1.0) by the specific radius passed in (which means adjusting the X, Y, Z and radius values accordingly).
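To make that concrete, here’s a minimal sketch of the scaling idea. The tuple shape matches what the service code uses, but `scaleCircle` is a name of my own, just for illustration:

```fsharp
// A unit-radius result record: (x, y, curvature) plus recursion level
type UnitCircle = (float * float * float) * int

// Scale a unit (radius 1.0) result to a given radius: positions
// scale up with the radius, while curvature (1/r) scales down
let scaleCircle (rad : float) (((x, y, k), level) : UnitCircle) =
  ((x * rad, y * rad, k / rad), level)

// A unit circle at (0.5, 0.5) with curvature 2.0, scaled to radius 10.0
let scaled = scaleCircle 10.0 ((0.5, 0.5, 2.0), 3)
// → ((5.0, 5.0, 0.2), 3)
```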

Assuming we only want to support 10 “levels” for both circles and spheres, that means we could effectively store 20 records (with those for the higher recursion levels getting quite big, of course) in a cache, and then very rarely need to call into our CPU-consuming, core F# algorithms. Which hopefully means we can then scale back the size of our VM instances, as suggested earlier.

I briefly mentioned two optimization approaches in this previous post: memoization and caching.

Memoization is a technique that stores the results of pure function calls and reuses them where needed. This F# snippet (based on this academic paper) does this very well for the Fibonacci sequence, for instance. This would actually give some pretty cool benefits, especially as we have recursive operations that are highly repetitive in our code, but on the other hand would not give us benefits across server instances (and this is certainly something to consider if your application ends up scaling, over time).
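For anyone who hasn’t seen the technique, here’s a minimal memoization sketch of my own (not the linked snippet), applied to the Fibonacci sequence:

```fsharp
open System.Collections.Generic

// Wrap a function so each distinct argument is computed only once:
// results are stored in a dictionary keyed by the input
let memoize (f : 'a -> 'b) =
  let cache = Dictionary<'a, 'b>()
  fun x ->
    match cache.TryGetValue x with
    | true, v -> v
    | _ ->
      let v = f x
      cache.[x] <- v
      v

// A memoized Fibonacci: the recursive calls go back through the
// cache, turning exponential work into linear
let rec fib : int -> int =
  memoize (fun n -> if n < 2 then n else fib (n - 1) + fib (n - 2))

// fib 40 returns almost instantly; the naive recursive version
// takes noticeably longer
```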

Enter caching… while this doesn’t give us the internal efficiencies of memoization – meaning within the execution pass needed to generate the results for a single request – caching would allow us to store the results for a given request and simply not have to generate them again the next time they were needed. And if we implemented caching – which really does seem to make sense, given our desire to provide a scalable service – we would ultimately only see any benefits from memoization in the pass needed to populate the cache, anyway.

So let’s go ahead and implement caching for our service…

Caching in Windows Azure is really easy to implement. The main steps to follow can be found in this very straightforward set of instructions [update: with the new distributed caching capability this link now points to something else… these instructions show how I set up caching for this service]. I won’t repeat these steps here – as the provided documentation is really good – but I will make a few comments before we take a look at the client-side changes to make use of this cache.

The caching service is not currently available in all Azure data-center locations:

Creating our caching service

It’s for this reason I ended up choosing Western Europe to host my service, in the last post, as at least I knew the caching service would be co-locatable with it.

It’s also worth mentioning that if you’re using the caching service from your local ASP.NET service – which still works, but doesn’t make as much sense, as there’s communication overhead – then you may hit a different category of errors than you would when deployed in the cloud.

For instance, during integration of the caching service into my code, I regularly hit a DataCacheException exception. This error can occur when you have large – in excess of 8 MB – records in your cache (which I know isn’t the case, as our level 10 circles results take up about 4.3 MB in a text file, and so shouldn’t hit the limit, even allowing for some inflation when in memory), but can also occur when you get a communication timeout (>15s). This was almost certainly the case with my local web-service attempting to save to the central cache, but went away (I believe!) when everything was hosted on Azure.
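Since the timeout-flavoured failures are transient, one option is to wrap cache calls in a simple retry helper. This is a generic sketch of my own – not something from the service code – that you could wrap around a `cache.Get` or `cache.Add` call:

```fsharp
open System.Threading

// Run an operation, retrying up to the given number of attempts
// with a pause in between; the final attempt lets any exception
// propagate to the caller
let rec retry attempts (delayMs : int) (f : unit -> 'a) =
  if attempts <= 1 then f ()
  else
    try f ()
    with _ ->
      Thread.Sleep delayMs
      retry (attempts - 1) delayMs f

// Usage sketch: retry 3 500 (fun () -> cache.Get(cacheId steps))
```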

I should also point out that I’ve now gone ahead and reduced the footprint of the JSON returned by our service (something I mentioned in an update to the last post): I never really liked having “Curvature” as a field name for every circle – and “Radius” for every sphere – so I reduced them to a single character (“C” and “R”, respectively). This brings the data transferred down by a not-inconsiderable amount (~8% for spheres and ~14% for circles), so it was worth doing, even if at the expense of the human-readability of the JSON output.

Now for the code to make use of our caching service… hopefully it’s reasonably obvious – even in F# – what was needed to use the service from a client’s perspective (even if that client is inside our own services’ code :-). I’ve added some comments to help explain what’s going on.

Here’s the adjusted CircleController.fs file:

namespace FsWeb.Controllers

open System
open System.Net
open System.Web.Http
open Microsoft.ApplicationServer.Caching

// A more compact profile for our data: we've shortened the field
// names to keep the data stored/transmitted down

type Circle (X, Y, C, L) =
  member this.x = X
  member this.y = Y
  member this.k = C
  member this.l = L

type CirclesController() =
  inherit ApiController()

  // Get the cache - this is currently called for each request;
  // it would be better to place this somewhere where it's called
  // only once

  let cacheFactory = new DataCacheFactory()
  let cache = cacheFactory.GetDefaultCache()

  // A function to help scale our unit result by a given radius
  // (for circles we need to divide - rather than multiply - the
  // curvature value)

  let circleOfRad rad ((a:float, b:float, c:float), d:int) =
    new Circle(a * rad, b * rad, c / rad, d)

  // Use a unique prefix to help us identify the "circles" results
  // in our cache (as it's shared across the two services)

  let cacheId steps = "Circles" + steps.ToString()

  // GET /api/circles/rad/steps

  member x.Get(rad:double, steps:int) =

    // We'll now reject calls for recursion levels above 10

    if steps > 10 then
      raise (new HttpResponseException(HttpStatusCode.BadRequest))
    else

      // If the cache doesn't contain the unit results we need,
      // generate them

      let cached = cache.Get(cacheId steps)
      if cached = null then
        let results =
          CirclePackingFullFs.Packer.ApollonianGasket 1.0 steps
        try

          // And then attempt to add them into the cache, with
          // an expiration time of two weeks hence. Note that we
          // serialize an array: serializing an F# list quickly
          // results in a stack overflow exception

          cache.Add(
            cacheId steps,
            results |> List.toArray,
            TimeSpan.FromDays(14.)) |> ignore
        with
          | _ -> ()

        // Return the results, scaled by our radius

        results |> List.map (circleOfRad rad) |> List.toSeq
      else

        // If the cache contained the required unit results, cast
        // them to the right format and map our scaling function

        cached :?> (((float * float * float) * int) array) |>
          Array.map (circleOfRad rad) |> Array.toSeq

And here’s the analogous – and very similar – SpheresController.fs:

namespace FsWeb.Controllers

open System
open System.Net
open System.Web.Http
open Microsoft.ApplicationServer.Caching

// A more compact profile for our data: we've shortened the field
// names to keep the data stored/transmitted down

type Sphere (X, Y, Z, R, L) =
  member this.x = X
  member this.y = Y
  member this.z = Z
  member this.r = R
  member this.l = L

type SpheresController() =
  inherit ApiController()

  // Get the cache - this is currently called for each request;
  // it would be better to place this somewhere where it's called
  // only once

  let cacheFactory = new DataCacheFactory()
  let cache = cacheFactory.GetDefaultCache()

  // A function to help scale our unit result by a given radius

  let sphereOfRad rad ((a:float, b:float, c:float, d:float), e:int) =
    new Sphere(a * rad, b * rad, c * rad, d * rad, e)

  // Use a unique prefix to help us identify the "spheres" results
  // in our cache (as it's shared across the two services)

  let cacheId steps = "Spheres" + steps.ToString()

  // GET /api/spheres/rad/steps

  member x.Get(rad:double, steps:int) =

    // We'll now reject calls for recursion levels above 10

    if steps > 10 then
      raise (new HttpResponseException(HttpStatusCode.BadRequest))
    else

      // If the cache doesn't contain the unit results we need,
      // generate them

      let cached = cache.Get(cacheId steps)
      if cached = null then
        let results =
          SpherePackingInversionFs.Packer.ApollonianGasket
            steps 0.01 false
        try

          // And then attempt to add them into the cache, with
          // an expiration time of two weeks hence. Note that we
          // serialize an array: serializing an F# list quickly
          // results in a stack overflow exception

          cache.Add(
            cacheId steps,
            results |> List.toArray,
            TimeSpan.FromDays(14.)) |> ignore
        with
          | _ -> ()

        // Return the results, scaled by our radius

        results |> List.map (sphereOfRad rad) |> List.toSeq
      else

        // If the cache contained the required unit results, cast
        // them to the right format and map our scaling function

        cached :?> (((float * float * float * float) * int) array) |>
          Array.map (sphereOfRad rad) |> Array.toSeq

We’re choosing to cache our results for 2 weeks: the default is 48 hours, but I decided to extend that. We could also have extended that further, but for now I decided to keep it lower, as that way if I make errors during development the unnecessary entries will go away before too long. :-)

Let’s take a look at the cache usage (via my new favourite online tool, the Windows Azure Management Portal):

Our cache usage

We’re up at about 40 MB: well below our 128 MB quota, and I suspect that’s about twice as big as it needs to be. When I changed the JSON format I also changed the cache prefix identifier, which will have led to the cache getting repopulated when it actually wasn’t needed: we’re not storing the JSON itself in the cache, but an array of F# tuples that then gets mapped to JSON after retrieval. So the cache should be back down at around 20 MB in a couple of weeks’ time. :-)

A very quick comment on whether this was all worth the effort: I honestly believe it was. While I haven’t done extensive performance benchmarking of the before and after states, I’ve seen the performance remain quite adequate (this may vary geographically, but spinning up instances in other data-centers would help with that).

In this particular case where we have a fairly limited set of outcomes that are used repeatedly to generate results for the service, it makes good sense to implement caching, in this way. And the economics will certainly be favourable should we choose to scale the service upwards, adding additional extra-small VM instances over time to cope with increased demand.

Over the next couple of posts in this series, we’ll look at consuming the data in different ways. We’ll start with the classic AutoCAD client, and then have some real fun looking at a client developed using a game engine. :-)

Update

Thanks to prompting from Michael, I went ahead and rounded off the decimals being returned to the caller to 4 decimal places. It was a simple matter of swapping the circleOfRad and sphereOfRad functions for these implementations:

  let circleOfRad rad ((a:float, b:float, c:float), d:int) =
    new Circle(
      Math.Round(a * rad, 4),
      Math.Round(b * rad, 4),
      Math.Round(c / rad, 4),
      d)

and

  let sphereOfRad rad ((a:float, b:float, c:float, d:float), e:int) =
    new Sphere(
      Math.Round(a * rad, 4),
      Math.Round(b * rad, 4),
      Math.Round(c * rad, 4),
      Math.Round(d * rad, 4),
      e)

Which led to a further reduction of 40-50% in the JSON download. :-)
