Through the Interface: AU 2012 Handout: Moving code to the cloud – it’s easier than you think – Part 1


November 20, 2012

AU 2012 Handout: Moving code to the cloud – it’s easier than you think – Part 1

After posting the handout for my Wednesday class, now it’s time to start the one for Tuesday's - CP1914 - Moving Code to the Cloud: It's Easier Than You Think (I have a lot else going on on Tuesday, but this is the only class on that day for which I needed to prepare material). Attendance for both classes is looking fairly good: there are currently 138 attendees registered for the Cloud session and 62 registered for the one on WinRT.

Why all this talk of the cloud?

The software industry is steadily adopting a model commonly referred to as “cloud computing”. For many software developers, this will not appear to be anything very new: many have worked with centralized computing resources in the past – in the mainframe era – but that’s not to say that this shift isn’t valid.

So let’s step back and look briefly at how the industry has evolved over recent decades. The mainframe era gave way to personal computing: Autodesk was one of many software companies that identified this trend and rode the wave to become a successful business. The initial releases of AutoCAD were far from being really useable, in many ways, but it was clear that Moore’s Law – which indicated that the number of transistors on chips would double every 18-24 months – would hold long enough for the various pieces of the puzzle to fall into place, providing the necessary performance from a personal computer before very long.

Nothing lasts forever, especially if it’s an exponential law. Moore’s Law as we know it has hit a bit of a barrier in recent years: we’re no longer seeing CPU clock speed doubling, for instance, as we’re hitting certain physical laws that prevent this from happening. This article from Herb Sutter does a great job of explaining this in detail.

That said, technology continues to evolve: rather than chips doubling in speed, we’re seeing the number of cores doubling, and the overall computing power that’s available to us via the cloud continuing to grow, too. For more information on this shift, see Herb’s sequel to the above article.

Ultimately, for software developers to continue to see performance gains (and cost efficiencies), software is having to be architected to work in a more distributed manner, with much of its processing performed on the cloud. This work distribution is being made possible via improvements in infrastructure – it’s very common for people to have redundant methods of accessing the Internet, for instance, whether wired or wireless – and the cost of centralized resources is continuing to drop as competition between major hosting providers turns computing resources into a commodity (and some would say utility).

At the same time as we’re seeing some kind of plateauing in the performance of local, sequential software execution (although note the emphasis on sequential: parallelizing code can still bring performance gains from multi-core systems), we’re seeing a rise in the availability of lower-powered, mobile devices. As we enter the post-PC era, there’s an increasing desire to access centralized computing resources from devices that are little more than “dumb” terminals (although the modern smartphone contains more computing power than existed on the planet on the day many of its users were born).

And the world is becoming truly heterogeneous in terms of computing devices: over time, software developers will target specific operating systems less and less, with core algorithms executing centrally. There may still be some amount of native code targeting various supported devices, but even that is likely to be reduced as true cross-platform execution improves (whether via toolkits or HTML5).

Moving code to the cloud can make sense

[You may want to skip this section if you remember reading this post.]

So why would you move product functionality to the cloud? Here are some reasons:

  • Performance
    • If you have a problem you can easily chunk up and parallelize – rendering is a great example of this, as we’ve seen with Project Neon (which it seems is now known as Autodesk 360 Rendering) – then the cloud can provide significant value. Renting 1 CPU for 10,000 seconds costs (more or less) the same as renting 10,000 CPUs for 1 second.
  • Scalability
    • With cloud services you pay for what you use, which should scale linearly with your company’s income (or benefits) from hosting functionality in that way. Dynamic provisioning allows companies to spin up servers to manage usage spikes, too, which allows infrastructure to be made available “just in time” rather than “just in case”.
  • Reliability
    • You often hear about measurements such as “five nines” uptime, which means 99.999% availability (or about 5 minutes of downtime per year). Some providers are no doubt proving better than others at meeting their availability SLAs, but the fact remains: having a local system or server die generally creates more significant downtime than the outages suffered by cloud providers. And that should only get better, over time.
  • Low cost
    • As cloud services get increasingly commoditized – and Microsoft, Google and Amazon are competing fiercely in the cloud space, driving costs down further – using the cloud is becoming increasingly cost-effective.
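Incidentally, the “five nines” figure mentioned above is easy to sanity-check with some quick back-of-the-envelope Python (ignoring leap years):

```python
# Annual downtime implied by a given availability figure.
def annual_downtime_minutes(availability):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)
    return (1.0 - availability) * minutes_per_year

print(round(annual_downtime_minutes(0.99999), 1))  # "five nines": ~5.3 minutes
print(round(annual_downtime_minutes(0.999), 1))    # "three nines": ~525.6 minutes
```

So each extra “nine” buys you a tenfold reduction in allowed downtime – which is why those last nines are so expensive to deliver.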

That’s a bit about the “why”, here’s the “when”…

  • Computation intensive
    • If you have serious number crunching going on locally in your desktop apps – which either ties up resources that could be used differently or stops your apps running on lower-spec hardware – then the cloud is likely to look attractive. I mentioned we make the cloud available for rendering, but we’re doing the same with simulation and analysis, too.
  • Collaboration
    • Imagine implementing the collaboration features of AutoCAD WS without the cloud…
  • Frequent change
    • If you have applications that go through rapid release cycles, then update deployment/patching is likely to be a challenge for you. Hosting capabilities on the cloud – appropriately versioned when you make breaking interface changes, of course – can certainly help address this.
  • Large data sets
    • The ideal scenario is clearly that data is co-located with the processing capability that needs to access it. Much of this data is currently stored on local systems – which makes harnessing the cloud a challenge – but as data shifts to be hosted there (for lots of very good reasons), this starts to become more compelling.
    • Another example: let’s say you have an application that relies on a local database of pricing information. Making sure this database is up-to-date can be a royal pain: it’s little surprise, then, that a number of the early adopters of cloud technology in the design space relate to pricing applications.

These are the main benefits that have been presented to ADN members during DevDays. There are a few additional benefits that I’d like to add…

  • Customer intimacy
    • Delivering software as a service can increase the intimacy you have with customers – and with partners, if you’re providing a platform. You have very good knowledge of how your technology is being used – and this has a “Big Brother” flip-side that people often struggle with, as you clearly have to trust your technology provider – which can allow you to provide better service and even anticipate customer needs.
  • Technology abstraction
    • You may have some atypical code that you’d like the user to not have to worry about: let’s say you have some legacy product functionality implemented using Fortran or COBOL that you’d rather not have to provide a local runtime to support. Hiding it behind a web service reduces the complexity in deploying the application and can provide a much cleaner installation/deployment/usage capability.
  • Device support
    • This is probably obvious (as many of the preceding points will have been to some of you, I expect), but web services are accessible from all modern programming languages on any internet-enabled device. Web services are a great way to more quickly support a variety of usages of your application’s capabilities on a variety of devices.

Today’s Example

We’re going to take an example that hits on a few of these topics: we have a core algorithm – implemented in F#, which some might questionably classify as arcane ;-) – that we want to move behind a cloud-hosted web-service and use from a number of different devices.

The algorithm generates Apollonian Gaskets – a 2D fractal, which places as many circles as it can within the “whitespace” inside a circle – and Apollonian Packings – the 3D equivalent, which obviously deals with spheres.
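The full F# algorithm isn’t reproduced in this post, but it’s worth a quick aside on the math that gasket generators of this kind typically rely on: Descartes’ Circle Theorem, which relates the curvatures (1/radius) of four mutually tangent circles. Here’s a small Python illustration of the theorem – my own sketch of the underlying math, not code from the service:

```python
import math

# Descartes' Circle Theorem: given three mutually tangent circles with
# curvatures k1, k2, k3 (curvature = 1/radius), a fourth circle tangent
# to all three has curvature k1 + k2 + k3 +/- 2*sqrt(k1*k2 + k2*k3 + k3*k1).
def soddy_curvatures(k1, k2, k3):
    s = k1 + k2 + k3
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    # The "+" solution is the small circle nestled in the central gap;
    # the "-" solution (negative here) is the large enclosing circle.
    return s + root, s - root

inner, outer = soddy_curvatures(1.0, 1.0, 1.0)
print(round(inner, 4))  # curvature of the circle between three unit circles
```

Applying this recursively to each newly found circle is what fills the “whitespace” with ever-smaller circles.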

We’re going to have fun using this service to generate some interesting 3D visualizations on a variety of platforms.

Choosing a cloud hosting provider

Autodesk is a very heavy user of Amazon Web Services, which might indicate it would be a good, long-term choice for users and developers to adopt (as co-location with data is of benefit, as we’ve seen).

That said, there are lots of factors that contribute to this kind of decision.

The early popularity of AWS was due in large part to its focus on providing Infrastructure-as-a-Service (IaaS): they made it really easy for companies with their own servers to move them across to be hosted centrally. Many companies shifted from on-premise servers (or perhaps their own data-centers) to centrally hosted and managed servers.

Microsoft’s approach has been to deliver highly integrated Platform-as-a-Service (PaaS) offerings: they abstract away the physical machine, focusing on the “roles” that you deploy to the cloud. Microsoft is now starting to deliver via an IaaS model, just as Amazon is providing more by way of PaaS from their side.

In our particular example, we’re going to make use of Windows Azure. That’s not to say it’s better for everyone – it’s just what I’ve chosen to use for this project as the integration with Visual Studio is first-class and I have free hosting provided via my MSDN subscription.

If you’re interested in AWS, I recommend looking at some of the guides on ADN’s Cloud & Mobile DevBlog.

Another option is Google App Engine, which provides an even higher level of abstraction than Azure. It seems to be an excellent system for highly granular, scalable tasks (without even having the underlying concept of physical machines in the picture). If you’re interested in learning more about Google App Engine, I recommend attending tomorrow’s 8am class by my colleague (and manager), Ravi Krishnaswamy:

CP2568 – PaaSt the Desktop: Implementing Cloud-Based Productivity Solutions with the AutoCAD® ObjectARX® API

Architecting for the cloud

There are lots of decisions to be made when considering moving application functionality to the cloud.

Not least of which is “what is your core business logic?” – meaning the algorithms that are application- and device-independent. Moving these algorithms out of your core implementation increases your flexibility and platform independence.

You also need to consider the data that needs to be transferred between the client and the cloud: both in terms of the arguments that need to be sent to your cloud-based “function” and the results that need to be brought back down to earth afterwards. Ideally you’d be working with data that’s already hosted in the cloud – and this is likely to happen more and more, over time – but that’s not necessarily where we are today.

You should also think about whether there are optimizations to be made around repeated data: should you be making use of cloud-hosted database storage (which is very cheap when compared with compute resources) or some kind of caching service? In our case we’re going to re-calculate the data, each time, but we could very easily implement a cache of the “unit” results and then multiply those for specifically requested radii.
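To make that caching idea a little more concrete, here’s a purely illustrative Python sketch – not the service’s actual implementation – of memoizing the “unit” results per recursion depth and scaling them for each requested radius:

```python
# Illustrative sketch: cache the "unit" (radius 1) packing for each
# recursion depth, then scale it on demand, rather than recomputing
# the fractal for every requested radius.
_unit_cache = {}

def unit_circles(steps):
    # Hypothetical stand-in for the real packing algorithm, run at
    # radius 1. Here we just return the outer circle as (x, y, r).
    if steps not in _unit_cache:
        _unit_cache[steps] = [(0.0, 0.0, 1.0)]
    return _unit_cache[steps]

def circles(rad, steps):
    # Scale the cached unit-radius result by the requested radius.
    return [(x * rad, y * rad, r * rad) for (x, y, r) in unit_circles(steps)]

print(circles(2.0, 2))  # [(0.0, 0.0, 2.0)]
```

Because the fractal’s geometry scales linearly with the outer radius, the expensive recursion only ever needs to run once per recursion depth.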

Another important question is whether offline working needs to be supported: does a local version of the algorithm – or a cache of local data – need to be maintained in order to enable this?

An increasingly easy decision, amongst others that are quite tricky, is how to expose your web-services. These days the commonly accepted approach is to expose RESTful services, which means the transport protocol for data is standard HTTP and (most commonly) any results will be encoded in JSON – JavaScript Object Notation. JSON isn’t actually a required part of REST – it’s also possible for RESTful services to return XML, for instance – but it has become the de-facto approach that is most favored by API consumers.
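To show just how little friction JSON creates on the consuming side, here’s a small Python sketch that decodes the kind of payload our “circles” API will return – the values below are made up for illustration, not real output from the service:

```python
import json

# A RESTful JSON service returns plain text over HTTP that any modern
# language can decode in a line or two. Each entry here represents a
# circle, with C (curvature), L (level), X and Y fields.
payload = '[{"C":1.0,"L":0,"X":0.0,"Y":0.0},{"C":3.0,"L":1,"X":0.6667,"Y":0.0}]'

circles = json.loads(payload)
print(len(circles))     # 2
print(circles[1]["C"])  # 3.0
```

The equivalent SOAP exchange would involve building and parsing a full XML envelope around the same data – which is exactly the overhead that has driven API consumers towards REST and JSON.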

The previous “standard” was SOAP – Simple Object Access Protocol. SOAP has gradually ceded ground to REST, as it requires more effort to create the XML envelopes containing the data to transmit to the web-service, and is generally more verbose, requiring more bandwidth.

A common requirement is around authentication: you probably want to be able to monitor and control access to your web-services. This is not going to be covered in this class, but you may want to look at OAuth-compatible toolkits such as DotNetOpenAuth, which comes pre-integrated with ASP.NET 4.5.

How you end up exposing your RESTful web-services will depend on your choice of technology stack (although these days it should be simple to do so whichever choice you make). The Microsoft stack – which would often involve ASP.NET at some level, irrespective of whether you host on AWS or Azure – certainly abstracts away a lot of the messiness of exposing web-services, but comes with a certain execution overhead. If you really want to get “close to the metal” then you might also want to consider a Linux environment: not only do you end up with lower execution overhead, but the cost associated with Linux instances can be very interesting. And as we know, the actual implementation of the web-service should be largely irrelevant to the consumer.

For today’s example we’re going to go with Microsoft and choose its ASP.NET MVC 4 Web API. This is a great way to expose web-sites with associated web-services, and seems to be the product of choice for people using the Microsoft stack to expose web-services, these days. WCF, the Windows Communication Foundation, provides some very interesting capabilities – especially when needing to marshal more complex data-types to and from web-services – but our requirement is relatively simple and the Web API seems the best fit. The ADN DevBlog mentioned earlier provides some good information on using WCF.

Considering cloud costs

One of the key benefits of the cloud is its ability to scale as you provision more resources to deliver your web-services. The counterpoint is that if you over-estimate the resources required to do this, your costs will be proportionally higher than they need to be.

Companies making heavy use of the cloud tend to invest in tools they can use to scale up and down automatically based on usage. This is not a topic that we’ll cover today, but it’s worth pointing out that getting this right is important for any significant cloud-based deployment.

There are some general things you can do to keep costs in check: consider looking for ways to reduce your instance sizes – dropping from a small to an extra-small instance can bring significant cost benefits (adding some caching or online database storage might be a way to enable this, helping reduce the processing load).

Online calculators are available to help you determine up-front costs associated with provisioning resources; both Azure and AWS provide them. Be sure to monitor actual costs (and optimize provisioning based on real usage, ideally), to make sure they are in line with projections.
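For rough, up-front projections the arithmetic is simple enough to sketch yourself – the rates below are hypothetical placeholders, not actual Azure or AWS pricing, so always check the providers’ own calculators:

```python
# Back-of-the-envelope monthly cost: instances * hourly rate * hours.
# The hourly rates used here are made-up placeholders for illustration.
def monthly_cost(instances, hourly_rate, hours_per_month=730):
    return instances * hourly_rate * hours_per_month

small = monthly_cost(2, 0.12)        # two hypothetical "small" instances
extra_small = monthly_cost(2, 0.04)  # the same pair, downsized
print(round(small - extra_small, 2)) # the monthly saving from downsizing
```

Even with toy numbers, the point stands: instance size is multiplied by every hour you’re provisioned, so over-provisioning compounds quickly.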

The Problem

Now let’s go back to today’s “problem”. We want to move our business logic – the core F# algorithm used to generate 2D and 3D Apollonian fractals – behind a cloud-based web-service. The original implementation of the 3D packing algorithm was provided in C++ by a Professor of Mathematics at the prestigious ETH in Zurich, but I chose to migrate the code to F# to see how it looked (and having the code in a language that isn’t necessarily easy to get working on OS X, iOS and Android demonstrates the “technology abstraction” point from an earlier section nicely).

As mentioned earlier, we’re not going to worry about authenticating users of our web-service: it’s a topic that would probably deserve a class of its own. We’re going to expose a simple, unsecured web-service (at least in terms of the need for authentication to make use of it).

Once it’s up, you’ll be able to query the geometry defining 2D Apollonian Gaskets using the “circles” API:

And 3D Apollonian Packings using the “spheres” API:

2D Apollonian Gasket and 3D Apollonian Packing

Building a simple web-service

We’re going to use the ASP.NET MVC 4 Web API to expose our web-service. MVC – standing for the common “Model View Controller” architectural pattern, which is used to separate model data from UI and interactions – is Microsoft’s technology of choice for defining and hosting web-sites and -services on top of ASP.NET.

We don’t care a great deal about the web-site – we’re much more interested in the web-services – but we’ll go ahead and create one, anyway.

While Windows Azure apparently now supports .NET 4.5, we’re going to stick with .NET 4.0 (at the time of writing this new capability had only just been announced). We’re going to install an F#-aware project template into VS2012 and make use of that to create our Web API project.

Once published to Azure, the code will be hosted and executed on Windows Server 2008 (although this is really a detail – this is not something we should have to worry about at all).

We’ll start by getting our project template installed. We can select it via the “Extensions and Updates” manager on the VS 2012 Tools menu, searching for “F# C# MVC 4”.

Using the F# C# MVC 4 Web API project template
Once installed, we can launch a project of this type and select “WebApi Project”:

The template in action

Visual Studio will go ahead and create our basic project from the template. We can launch the default web-site via the debugger:

Our default web-site

To test the default implementation of the web-service, try appending the following suffix to the URL:


At this point the browser will ask us whether we want to save or open the results from the web-service. Opening the results in Notepad should show them to be ["value1","value2"].

There are a few changes we’ll make to the project to get it working as we want it.
Firstly, we should change the .NET Framework target from 4.5 to 4.0 for both of the contained projects.

Then some changes to the web-site project (ApollonianPackingWebApi).

  • We want to copy across some files from the “ToCopy” folder:
    • Site.css into the Content folder
    • Three images into the Images folder (two of which need adding to the project)
    • Index.cshtml into the Views -> Home folder
    • crossdomain.xml into the root folder

Now running the project should look very different (although the .CSS change doesn’t always get picked up if running locally – the background of the “Welcome” area should be orange, but often looks blue before it makes it up to Azure).

There are still some changes needed to allow our service to support “cross domain scripting” (which for us primarily means being callable from client-side HTML5/JavaScript code). The first step was to add the crossdomain.xml file but we also need to open Web.config and add these elements:

Inside <configuration><system.web>:

    <customErrors mode="Off" />

Inside <configuration><system.webServer>:

    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Origin" value="*" />
      </customHeaders>
    </httpProtocol>
And then we should expand the serialization limit beyond the default, to make sure it’s large enough for our largest JSON string. This goes inside <configuration><system.web.extensions><scripting><webServices>:

    <jsonSerialization maxJsonLength="500000" />
We can then update the web-services project (ApollonianPackingWebAppApi).

Copy across the various files into the root project folder:

Global.fs will update the existing file. Here are its contents:

namespace FsWeb

open System
open System.Web
open System.Web.Mvc
open System.Web.Routing
open System.Web.Http
open System.Data.Entity
open System.Web.Optimization
open System.Linq
open System.Collections.Generic
open Newtonsoft.Json
open Newtonsoft.Json.Serialization

// Orders JSON properties alphabetically when serializing
type OrderedContractResolver() =
  inherit DefaultContractResolver()
  override x.CreateProperties(tp, ms) =
    (base.CreateProperties(tp, ms).OrderBy
      (fun(p) -> p.PropertyName)).ToList() :> IList<JsonProperty>

type BundleConfig() =
  static member RegisterBundles (bundles:BundleCollection) =
    // Register the bundle containing the site's CSS
    bundles.Add(
      StyleBundle("~/Content/css").Include("~/Content/Site.css"))

type Route =
  { controller : string
    action : string
    rad : UrlParameter
    steps : UrlParameter }

type ApiRoute =
  { rad : obj
    steps : obj }

type Global() =
  inherit System.Web.HttpApplication()

  static member RegisterGlobalFilters
    (filters:GlobalFilterCollection) =
      filters.Add(new HandleErrorAttribute())

  static member RegisterRoutes(routes:RouteCollection) =
    // API requests take the form /api/{controller}/{rad}/{steps}
    routes.MapHttpRoute(
      "DefaultApi", "api/{controller}/{rad}/{steps}",
      { rad = RouteParameter.Optional
        steps = RouteParameter.Optional }) |> ignore
    routes.MapRoute(
      "Default", "{controller}/{action}/{rad}/{steps}",
      { controller = "Home"
        action = "Index"
        rad = UrlParameter.Optional
        steps = UrlParameter.Optional } )

  member this.Start() =

    // Only support JSON, not XML
    let cfg = GlobalConfiguration.Configuration
    cfg.Formatters.Remove(cfg.Formatters.XmlFormatter) |> ignore

    // Order the JSON fields alphabetically
    let stg = new JsonSerializerSettings()
    stg.ContractResolver <-
      new OrderedContractResolver() :> IContractResolver
    cfg.Formatters.JsonFormatter.SerializerSettings <- stg

    Global.RegisterRoutes RouteTable.Routes |> ignore
    Global.RegisterGlobalFilters GlobalFilters.Filters
    BundleConfig.RegisterBundles BundleTable.Bundles

The other four files need to be added to the project. Two of them, CirclePackingFull.fs and SpherePackingInversion.fs, are copied directly from the previous AutoCAD-hosted version of the application. The other two, CirclesController.fs and SpheresController.fs, implement the logic to pass the API requests through to our core algorithm implementations.

Here’s CirclesController.fs:

namespace FsWeb.Controllers

open System
open System.Web.Http

type Circle (X, Y, C, L) =
  member this.X = X
  member this.Y = Y
  member this.C = C
  member this.L = L

type CirclesController() =
  inherit ApiController()

  // GET /api/circles/rad/steps
  member x.Get(rad:double, steps:int) =
    CirclePackingFullFs.Packer.ApollonianGasket rad steps |>
      List.map
        (fun ((a,b,c),d) ->
          new Circle
            (Math.Round(a, 4),
             Math.Round(b, 4),
             Math.Round(c, 4), d)) |>
      List.toSeq


And here’s SpheresController.fs:

namespace FsWeb.Controllers

open System
open System.Web.Http

type Sphere (X, Y, Z, R, L) =
  member this.X = X
  member this.Y = Y
  member this.Z = Z
  member this.R = R
  member this.L = L

type SpheresController() =
  inherit ApiController()

  // GET /api/spheres/rad/steps
  member x.Get(rad:double, steps:int) =
    // Generate a unit packing and scale the results by the
    // requested radius
    SpherePackingInversionFs.Packer.ApollonianPacking
      steps 0.01 false |>
      List.map
        (fun ((a,b,c,d),e) ->
          new Sphere
            (Math.Round(a * rad, 4),
             Math.Round(b * rad, 4),
             Math.Round(c * rad, 4),
             Math.Round(d * rad, 4),
             e)) |>
      List.toSeq

You can safely remove ValuesController.fs from the project (deleting it from disk, should you so wish).

To test our web-site and -service, we first want to launch it in a browser (most easily via the debugger):

Our modified web-site

With the web-site loaded, we can then add these URL suffixes into the browser to test the two APIs:

  • /api/circles/2/2
  • /api/spheres/2/2

The first number in each of these URLs specifies the desired radius of the outer circle/sphere to be packed, while the second specifies the recursion level: how “deep” the fractal should go.
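Forming these request URLs programmatically is trivial, of course. Here’s a quick Python sketch of a client-side helper – the host name is a made-up placeholder, as we won’t have a real one until we publish to Azure:

```python
# Build request URLs of the form {base}/api/{controller}/{rad}/{steps},
# matching the two APIs exposed by our service. The base host below is
# a hypothetical placeholder.
def api_url(base, controller, rad, steps):
    return "{0}/api/{1}/{2}/{3}".format(base, controller, rad, steps)

base = "http://apollonian.example.com"
print(api_url(base, "circles", 2, 2))  # .../api/circles/2/2
print(api_url(base, "spheres", 2, 2))  # .../api/spheres/2/2
```

Any HTTP-capable client – a browser, a mobile app, a desktop plug-in – can then issue a simple GET against these URLs.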

Here are the results of the first of these calls. It should be easy enough to see how the JSON file contains a list of circle definitions, each with X, Y, Curvature and Level values:


Assuming both web-service calls return valid JSON files, we are now ready to publish to Azure.

Which we will look at in the next post. :-)
