
August 04, 2014

Our holiday on Andros

The next few posts may seem a bit anomalous – yes, I more commonly write about programming with AutoCAD – but then I’ve been sitting on a variety of beaches on the Greek island of Andros for the best part of the last two weeks, so for now it’s all I have to write about. :-)

In today’s post I’m going to introduce Andros, one of the islands in the Cyclades island chain, while in the next post I’ll talk about its beaches.

Here’s a quick Autodesk-centric anecdote, though. I wore a pair of flip-flops with the Autodesk logo during basically the whole vacation: they started off nice and new, but over the course of the holiday the logo on each was worn away to nothing by my feet (they’re now just plain black). Which I found nice and symbolic: the logos faded as I gradually relaxed and stopped thinking about work (although in reality symbolism has its limits: ubiquitous wi-fi meant I was still on Twitter at least a few times a day…).

I’d last visited a Greek island 25 years ago, long enough ago that I no longer remember which one. Although as I’d reached it by hydrofoil from Piraeus – rather than by ferry from Rafina – it definitely wasn’t Andros.

On this trip we ended up visiting Andros as some good friends of ours in Switzerland – who are Greek and have a family property on Andros – chose it as the place to organise the baptism of their second child (a major event in their culture: congratulations, little Δημήτρης!). It seemed a great opportunity for us to discover the island, so we decided to book 12 nights there, with an additional night’s stay in Athens at each end to break the journeys there and back as well as to give us the chance to climb the Acropolis and walk around the Plaka.

The Parthenon

Getting to Andros was fairly straightforward: the ferry only took two hours to get from Rafina to Gavrio – the primary port of Andros – where we picked up our rental car. Some friends rented cars in Athens rather than on the island, which has its pros and cons (we weren’t sure we would need a car for the full time we spent on the island, but we ended up keeping it for all but the last day, when we chose to visit the beach next to our hotel). Unless you’re going to stay in the same place for your time in Andros – or be reliant on taxis – you’re really going to need a car, though.

View across the island

Andros has a quite rugged beauty, even if it’s apparently quite green compared with other islands in the Cyclades. The island is criss-crossed by amazing dry stone walls: large slabs placed vertically, held up by sections of smaller rocks used as bricks.

Dry stone walls

You certainly get the sense that back at the beginning of the 20th century – when the island had twice its current population of 9,000 – the countryside must have been quite something to see, when it was presumably still viable to cultivate land split across such small terraces.

Driving around the island

Andros’s tourism industry has largely remained domestic: I’ve been told this is because the land-owning Athenian bourgeoisie has blocked attempts to build large-scale hotels there, but whatever the reason it’s certainly true that we were mostly surrounded by Greek tourists. We came across a small number of English- and French-speaking tourists but didn’t hear anyone speaking German or Italian, for instance.

Despite a focus on domestic tourism, road signs and menus are mostly also in English. This helped us a lot, especially to start with, as it does take longer to read most Greek words. A friend who’d already visited Andros had pointed out that it’s worth trying, though: there are lots of words that are familiar, if you already speak a Romance language, as long as you can translate from Greek characters. And if you’ve studied mathematics or physics – or just attended an American University, for that matter – then there’s a good chance you’ve already been exposed to quite a lot of the Greek alphabet.

Andros is often very windy. This has some benefits – we didn’t get bothered at all by mosquitoes while we were there, for instance – but it does drive certain behaviours, such as choosing a beach that provides more cover from the wind direction on that particular day. During our stay we only had one day where the wind caused us to avoid beaches altogether.

While short on mosquitoes, Andros does have plenty of bees and produces a great deal of honey made from thyme nectar. It was a great way to start the day, eating Greek yoghurt mixed with local honey, although when eating breakfast outside at our hotel we had to battle some very aggressive wasps for the pleasure of doing so.

Between eating and visiting the beach, there isn’t actually a great deal to do on Andros in the summer: we had planned on renting bicycles or going for hikes, but it’s just too hot for that. There are a couple of larger towns to visit: we went to Batsi and to the island’s capital – known either as Andros town or as Chora, which appears to be the local term for an island’s capital – but we didn’t spend a great deal of time in either.

Andros town

Thankfully we enjoyed the beach a lot more than expected. As a rule I don’t enjoy more than a couple of days at the beach, but when I think about typical beach holidays I think of the crowded beaches of the south of France… not at all my thing.

My wife and I both enjoy scuba diving – which is something you can do from Andros, too – but as we were with the kids we stuck to snorkeling. We really didn’t see many fish underwater, though: the seas around the coast – even around reefs – were really bare. I hope this wasn’t due to overfishing, but it sadly seems likely to be the case.

We still enjoyed the incredible seafood available on Andros – it’s so rare that we’re near the coast – but I got the sense that most of what was served was brought in from elsewhere.

Grilled calamari

Fishing is just one industry that’s under pressure in Greece, these days, of course. A number of people visiting the baptism (or the hotel we were staying in) from Athens – whether journalists, stockbrokers or civil engineers – were out of work, with little cause for optimism. It’s very sad to see such a beautiful country – and the cradle of Western Civilisation – in the depths of a financial crisis.

I did meet an AutoCAD user while on the beach, though. His business is evolving in new directions due to the state of the economy, but AutoCAD is still a key tool for his company: he even proudly showed me that he had AutoCAD 360 on his iPhone and an email from Autodesk in his inbox (I’m less proud of that last point, but hey).

In the next post we’ll take a quick look at my top 5 beaches on Andros (of the 8 that I had the chance to visit), in case you ever get the chance to travel there. In the meantime I’m heading up to hike in the mountains during my last few days of vacation (I’m officially back from next week).

[The good news is that I’ve just received my 3D Robotics Bluetooth Data Link, which should allow me to add telemetry – and properly use DroidPlanner for “one button 3D capture” – with my Quanum Nova. More on that next week, though!]

July 18, 2014

The need for autonomous drone navigation

My Quanum Nova arrived from Hong Kong, yesterday. I’d just been tracking the parcel’s progress when the doorbell rang and the postman handed it to me: he seemed surprised I didn’t mind paying the additional SFr 47 customs fee (on top of the $299 price-tag and what I had to pay for batteries) and even let him keep the change. I was in that good a mood. ;-)


Reality gradually set in as I assembled the device, although that was pretty straightforward, overall. The assembly instructions were better than expected, and I was soon able to turn the device on (I’d charged all four of my 2700mAh li-poly batteries in the weeks I’d been waiting for the drone to show up, so I was locked and loaded… interestingly enough, the day before the drone arrived I received a notification that higher-capacity batteries are now available, too).


Unfortunately the instructions for getting the drone’s motors to arm were much less usable. In theory I was meant to use the left thumbstick on the controller – pushing it down to the bottom-right – to arm the motors. But try as I might I couldn’t get this to work. I wasn’t even sure whether the transmitter was controlling the drone at all: I really couldn’t get it to respond in any way.

Eventually I decided to connect it up to my PC and see if I could get anything to work using APM Planner 2.0. I was able to connect to the device – as it contains an ArduCopter-compatible board, an Arduino Mega 2560 – and could use APM Planner to adjust some options. Out of desperation – although I’d seen a few people on the forums had done this, themselves – I upgraded the ArduCopter firmware to v3.1.5.

Connecting the APM Planner 2 to my Quanum Nova

This worked, but didn’t help address the connection problem. I then went and re-calibrated the radio – which allowed me to at least see that the controller was indeed connecting to the drone. This calibration did allow the motors to arm: you can imagine my surprise when this happened in my living room (yes, with the propellers attached… who thinks to remove them when there’s every chance your drone’s a dud?) and the drone nearly got tangled up with the USB tether attaching it to my PC.

Calibrating the radio using APM Planner 2

Anyway, at least I knew my drone was functional, at this stage, and that I actually had to hold the right thumbstick downwards, too, for the motors to arm. The instructions could really do with including this additional nugget of information, but hey.

The next thing I did was – of course – head outside and try an actual flight. First, though, I attached my GoPro-equivalent camera, my CamOne Infinity, to the bottom of the Nova with the provided mount. As far as I’m aware, Photo on ReCap 360 won’t work with stills coming from the Infinity, but this was more to capture the event than to provide data for a 3D reconstruction.

I’m glad I did: the results were hilarious. The drone took off fairly steadily, but I then made the rookie move of switching to what I thought was a stabilized flying mode but ended up sending the drone skywards at high speed. I panicked, tried to adjust and brought the drone back down to earth with a thump. This dislodged the camera, which stayed in the grass filming the departure of the drone as it now – with much less weight – looped back up into the air and crashed elsewhere in the garden.

Here’s the video of the event – I’ve blurred the faces to protect the privacy of a family member, which I’m actually glad has also obscured the look on my face as most of this was playing out.

The drone is, incredibly, largely intact. The upper sides of two of the arms have cracked – as has the GPS tower on top – but nothing that can’t be held together with tape. I can’t help feeling that – despite any sense of personal ineptness – this is a fairly typical experience for first-time drone pilots. For me, though, it underscores two things: 1) I did well to start off with a $300 drone and 2) I really need to get to the point where software is taking care of an increasing amount of the navigation effort. If I’d wanted to be a pilot I would have chosen another career, after all. :-)

I’m heading off to Greece on Sunday for a couple of weeks, so it’ll be some time before I blog again. I’ll hopefully be able to tweet once or twice from the beach, but I’m planning to make this a proper break to be with the family. When I get back I’m hoping to spend a bit more time with the Nova, and perhaps even to start working with the new 3DR X8 that our office has just bought, too, which should work really well with the new “one button 3D capture” DroidPlanner functionality.

July 16, 2014

My first Autodesk 360 viewer sample

As mentioned last week, I’ve been having fun with Fusion 360 to prepare a model to be displayed in the new Autodesk 360 viewer. The sample is now ready to view, although I’m not yet quite ready to post the code directly here, mainly because the API isn’t yet publicly usable.

Here’s the app for you to take for a spin, as it were:

Steampunk Morgan Viewer

The Autodesk 360 Viewing & Data API is currently being piloted by a few key partners, and hopefully we’ll soon be broadening the scope to allow others to get involved (we first have to iron out any issues that might impact scalability, of course).

But let me give you a few pointers about what I did and what to expect when developing your own web applications that connect with this technology.

Firstly, as with any classic set of web services requiring authentication, there’s a need to request and provide an authentication token. You do this by calling a particular web-service API with your client ID and secret: standard authentication stuff. But this means you’re going to need some kind of server-resident code to get a token: it’s a really bad idea to embed your client ID and secret in client-side HTML or JavaScript.

To implement a server-side API, I’ve used Node.js. It’s the first time I’ve done so, but it was really easy to do (and I’d been meaning to give it a try).

For my hosting infrastructure, I went with Heroku (thanks, Cameron!). This is a lightweight, scalable hosting environment that has some really cool integrations: you host your code on GitHub and deploy directly from there to your Heroku instance via the command-line. Integrating Node.js and NewRelic (for application monitoring) took just a few commands and minor code changes. It was really easy to do.

Heroku provides a basic level of usage per application for free – 750 dyno hours per month, I believe. I’m running this app right now on a single instance – it’s really only performing authentication and serving up static HTML; the heavy lifting is done by code hosted on AWS that feeds data to the connected viewer – but I can scale this up according to usage. It will be interesting to see how the site gets used: NewRelic will provide me with that level of detail, I hope.

I found the Steampunk HTML UI for the sample on this page; it makes heavy use of CSS transforms. I contacted the author, Dan Wellman, who very generously provided the PSD file he’d used to generate the various web assets. Which meant that rather than tweaking the generated files, I could use Photoshop directly to regenerate my own assets with the required changes. This was really nice of you, Dan – thanks for the kindness!

We’ll take a look at the code in more detail in a future post, as well as the steps to get the model accessible from the embedded viewer. (A quick outline: you need to upload the file – in my case a .F3D archive exported from Fusion 360 – and then fire off a translation request, all using an authorization token generated using your client data, allowing an app using the same client data to access it in future. All this can be done using cURL at the command-line – if doing a one-off translation as I have done – or programmatically if you need something more dynamic.)
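
As a rough sketch of that outline – with hypothetical endpoint paths and payload shapes, since the API details aren’t yet public – the programmatic version boils down to base64-encoding the uploaded file’s URN and posting it as a translation request along with your token:

```javascript
// Sketch of the upload-then-translate flow; the endpoint path and
// payload shape here are illustrative stand-ins, not the real API.

// Services like this typically identify an uploaded file by a URN,
// base64-encoded when included in the translation request
function encodeUrn(urn) {
  return Buffer.from(urn).toString('base64');
}

function decodeUrn(encoded) {
  return Buffer.from(encoded, 'base64').toString();
}

// Build the translation request we'd POST – with the auth token in
// an Authorization header – once the .F3D archive has been uploaded
function buildTranslationRequest(urn, token) {
  return {
    method: 'POST',
    path: '/translation-service',  // hypothetical
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + token
    },
    body: JSON.stringify({ urn: encodeUrn(urn) })
  };
}
```

For a one-off translation like mine the same two calls can just as easily be issued from cURL, as mentioned above.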

In terms of my use of the Viewer API: I had hoped to use component isolation to highlight different parts of the model, but as the original model was imported from Alias with a very flat component hierarchy (with tens of thousands of bodies), there was basically no hope of me doing this without a large team of modelling specialists at my disposal. So I opted for swapping the view when the various buttons are clicked, instead. Which actually works reasonably well, so I’m happy sticking with this approach, for now.

July 15, 2014

Updates to Memento and ReCap Studio

The Memento team has been beavering away, delivering a number of updates containing some really interesting features that I’ve so far neglected to mention.

In the latest release (hosted on Autodesk Labs on the Autodesk Feedback Community), for instance, you can now access point cloud data via RCP export, as well as being able to generate 3D reconstructions with “Ultra” quality and smart textures (these are features you’d normally have to pay for when using Photo on ReCap 360, but this is being provided for free while it’s still on Autodesk Labs).

Here’s a quick reverse chronological look at recent releases of Project Memento:

  • (7/10/14)
    • ‘Ultra’ quality for the photo-to-mesh reconstruction ***
    • ‘Smart Texture’ option in the photo-to-mesh reconstruction ***
    • RCP Export
    • Export: Reporting polygon count in the export dialog
    • Export: Decimation control (0-99%) in the export dialog
    • Other minor fixes and general stability
      ***(both Ultra quality mode as well as Smart textures are available at no charge in Memento given that the product is still a Labs product)
  • (6/18/14)
    • Batch auto-fix defects
      • Batch fix holes and islands only, for now. This is, however, an important first step towards making it much easier and faster to fix badly created models with many errors (meshes that often come from laser scans or CAT-scan-to-mesh software packages)
    • New tools in the 3D printing environment
      • Automatic alignment to the base plane at first entering of the 3D print environment (if applicable)
      • Interactive orientation of the model in the 3D print envelope – just use your mouse to orient the model
      • Scale to fit within the printer bounds
    • More reporting tools around 3D printing
      • We can now calculate and report the volume of the print material and of the support material used in printing (the latter is only reported for 3D printers that do full support, such as OBJET). You will notice that for all other printers, support material data will not be available
  • (5/7/14)
    • Full resolution 3D Print environment
      • You can now select the 3D printer you will be printing on, see its bed size and how your mesh fits into it, scale the model for printing, hollow it, set units, etc. – all natively in the Memento environment – and 3D print your models at the best resolution your printer is capable of
    • Improved in-canvas visualization of models
    • Resizing brush size now has exposed UI component
      • It’s now easier to change the brush size, which is crucial to successful healing and fixing. Upon selection of the brush tool (right mouse click, Select, Brush) you will now see a slider that controls the brush size. (You can also resize the brush using the brackets [ and ], as well as Alt + mouse wheel.) These are temporary options that will soon be consolidated into a final design with a more sophisticated UI.
    • We also used the opportunity to fix some reported bugs and continue with performance improvements
    • Fix for the Heartbleed vulnerability

I expect to be using Memento more as soon as my new drone arrives (it’s in the country according to the package tracking, at least!).

Regarding the other product update mentioned in the title, there’s a hotfix posted for ReCap Studio which includes these fixes, according to the release notes:

  • Manual Stitch: improvements to increase general stability
  • Manual Stitch: a magnifier is displayed in the opposite panel to help correctly place the point in the second image
  • Keep dialog open in case of submission error message
  • Sharing option correctly displayed (download and download & update)
  • Allow free resubmission on a failed ultra project

July 10, 2014

Reminder of AU 2014 advance pass availability

A quick reminder – as I already posted on this topic a few months ago – about the “early bird” $600 discount that’s available now – and until August 19th – for this year’s AU. It’s a great way to get first dibs on the classes of your choice, as you get to register a week before people who don’t hold advance passes (such as Autodesk employees, sniff-sniff ;-).

It’s already shaping up to be a hectic AU for me, again: both the classes I submitted got accepted (thanks, developer track leads!) and it turns out I’ll be participating in a couple of others, too. Stephen Preston tells me he won’t be able to make it to AU, this year. It’s the first one he’s missed in a long time, but he’s doing so for good reasons. Hence his request that I host the until-now very popular “Meet the AutoCAD API Experts” panel session. Stephen is a dab hand at hosting this panel – it really won’t be the same without his dry sense of humour – but I’ll do my best to at least partially fill the significant void he’ll leave at this year’s event.

The fourth (and hopefully last) speaking engagement at this year’s AU – although I have no idea which order they’ll actually be scheduled – is to co-speak at a session about the car configurator Autodesk delivered to Morgan for this year’s Geneva Motor Show. I wasn’t involved in the development work for the project, so I’m very happy to be co-speaking on that one (it should be a lot of fun).

That’s it for this week: I’m heading off to Munich tomorrow to participate in this year’s Autodesk Football Tournament (yes, it’s an internal company event, I’m sorry to say for people who have expressed an interest in joining, in the past). Teams from all over the world will be fighting it out, and some players will no doubt stay on to watch Sunday’s World Cup final, too (presumably hoping for a German win, considering the event’s location :-). I’m planning to head back on Sunday to watch the match from home, but we’ll see what happens. I’ll then be back posting again during the course of next week before I head out on vacation for 2-3 weeks, after that.

July 08, 2014

Steampunking a Morgan 3 Wheeler using Fusion 360

My friends in the Autodesk Developer Network team asked me to get involved with creating a sample for the API we’re planning to launch soon for the new Autodesk 360 viewer. They were quite specific about the requirements, which was very helpful: something fun, perhaps with a steampunk theme, that shows some interesting possibilities around both the HTML5 container and the embedded viewer. It was also suggested that I look into hosting the Morgan 3 Wheeler as a possible model, so I really didn’t need to be asked twice. ;-)

I started by tracking down a model: I ended up using the Inventor files posted for the Morgan advertisement competition towards the end of last year. The posted archive actually has the originating Alias model inside the ZIP, it turns out, so I went ahead and imported that into Fusion 360.

The model is huge, so this has been a real test of the technology, and Fusion 360 has mostly been up to the challenge. I’m new to Fusion, but the experience has been positive – I’ve had to learn quite a bit as I’ve gone along, but the UI has been intuitive enough. Although I’m really only applying materials to geometry, so nothing very challenging from a hardcore modelling perspective.

Here are a few shots of the model. I really need a “grimy” visual style to go for the full-on steampunk feel, but overall I like the results, so far.

Let’s start with a view of the front:

A view from the front in Fusion 360

Here’s an animation of a few images taken at different stages of applying a material to a set of components. Yes, I know that copper is a really poor choice for a suspension spring – just as brass is less than ideal for an exhaust pipe – but this is all about the look, not at all about the eventual presumed performance. :-)

Making a suspension spring copper using Fusion 360

And here’s an overall view of the car in Fusion 360…

The overall Morgan 3 Wheeler

… as well as one in the Autodesk 360 viewer, although not all the materials appear with full fidelity (the generic material currently comes through as red, and I haven’t started working on the interior, as yet):

Our Fusion 360 model in the Autodesk 360 viewer

Oh, and the RaaS (Rendering as a Service) images look really beautiful, in case you’re wondering about that:

RaaS images of the steampunked Morgan model

Right now I need to work on uploading the model to the data service and embedding the viewer into a fun steampunk-themed HTML page I found on the web… I’ll post more about that, in due course.

July 04, 2014

Independence Day

Most of this week I’ve been heads-down trying to finish a sample demonstrating the API that’s coming for the new Autodesk 360 viewer (of which we saw a sneak peek in this recent post), so today I don’t have much to talk about. The sample is pretty cool and involves my favourite car. But hopefully you’ll be able to see it in action before too long, so there’s really no need to say more about it, for now. :-)

A good portion of this blog’s readership is currently grilling meat and celebrating the 4th of July, so I won’t beat myself up too much about not having much to say. (For those of you still reading emails before celebrating, here’s a helpful article on taking photos of fireworks, even with your mobile phone.) Enjoy your long weekend, those of you who have one!

Something I came across, this morning, is the openFrameworks project and, specifically, the ofxPiMapper add-on. This interesting tool allows you to specify portions of an image to project onto surfaces, and handles all the coordinate system transformations.
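
To give a sense of the kind of transformation involved – this is my own illustrative sketch, not ofxPiMapper code – mapping a point from the source image onto an arbitrary output surface can be done with simple bilinear interpolation:

```javascript
// Map normalized image coordinates (u, v in [0, 1]) onto a quad
// defined by four corners: top-left, top-right, bottom-right,
// bottom-left. This is the basic warp a projection mapper performs
// for each surface (real tools also correct for perspective).
function warpPoint(u, v, [tl, tr, br, bl]) {
  // Interpolate along the top and bottom edges...
  const top = { x: tl.x + (tr.x - tl.x) * u, y: tl.y + (tr.y - tl.y) * u };
  const bot = { x: bl.x + (br.x - bl.x) * u, y: bl.y + (br.y - bl.y) * u };
  // ... then between the two edge points
  return { x: top.x + (bot.x - top.x) * v, y: top.y + (bot.y - top.y) * v };
}
```

Pick the four corners to match where a physical surface sits in the projector’s view, and the image portion lands squarely on that surface.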

It seems to be very handy for art installations, in particular. Here’s an example use of this technology, highlighting its ability to run on a Raspberry Pi – which, while interesting, isn’t the most interesting aspect of it, from my perspective:

This has me thinking about how it might be used to display a 3D model – which is then viewable from multiple angles – using a single projector. You could map content to surfaces on three connecting sides of a plain 3D cube, for instance, but that wouldn’t be especially compelling for the viewer. Which then had me thinking about whether you could project onto Plexiglas (as I did for a Halloween illusion a few years ago), to create more of a holographic effect. But I can’t see how that would work without multiple projectors (or at least some mirrors).

What I like most about this, ultimately, is that it’s actually a project about projecting model projections. :-)  We’ll see if it ends up going anywhere, but that’s another question entirely, and one for another day.

July 02, 2014

Embedding a map image in an AutoCAD drawing using .NET

In this recent post we saw how to set our geographic location inside an AutoCAD drawing – essentially geo-referencing the model it contains – programmatically.

In today’s post we’re going to capture a section of the displayed map and embed it inside the drawing, much as the GEOMAPIMAGE command does. In fact, we’re going to use the GEOMAPIMAGE command to do most of the heavy lifting: we’ll simply call the command and then pick up the created GeomapImage object to manipulate its settings, adjusting some image properties (brightness, contrast and fade settings) and having the map image display hybrid (aerial + road) information.

Here’s the C# code implementing the CGI command, which should be run after the IGR command we implemented last time:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;

namespace GeoLocationAPI
{
  public class Commands
  {
    [CommandMethod("CGI")]
    public void CreateGeoMapImage()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (doc == null)
        return;

      var ed = doc.Editor;
      var db = doc.Database;

      // Get the first corner of our area to convert to a
      // GeomapImage

      var ppo = new PromptPointOptions("\nSpecify first corner");
      var ppr = ed.GetPoint(ppo);
      if (ppr.Status != PromptStatus.OK)
        return;

      var first = ppr.Value;

      // And get the second point as a corner (to rubber-band
      // the selection)

      var pco =
        new PromptCornerOptions("\nSpecify second corner", first);
      ppr = ed.GetCorner(pco);
      if (ppr.Status != PromptStatus.OK)
        return;

      var second = ppr.Value;

      // We'll use an event handler on the Database to check for
      // GeomapImage entities being added
      // (we'll use a lambda but assigned to a variable to be
      // able to remove it, afterwards)

      ObjectId giId = ObjectId.Null;
      ObjectEventHandler handler =
        (s, e) =>
        {
          if (e.DBObject is GeomapImage)
          {
            giId = e.DBObject.ObjectId;
          }
        };

      // Simply call the GEOMAPIMAGE command with the two points

      db.ObjectAppended += handler;
      ed.Command("_.GEOMAPIMAGE", first, second);
      db.ObjectAppended -= handler;

      // Only continue if we've collected a valid ObjectId

      if (giId == ObjectId.Null)
        return;

      // Open the entity and change some values

      try
      {
        using (var tr = doc.TransactionManager.StartTransaction())
        {
          // Get the object and check it's a GeomapImage

          var gi =
            tr.GetObject(giId, OpenMode.ForWrite) as GeomapImage;
          if (gi != null)
          {
            // Let's adjust the brightness/contrast/fade of the
            // GeomapImage

            gi.Brightness = 90;
            gi.Contrast = 40;
            gi.Fade = 20;

            // And make sure it's at the right resolution and
            // shows both aerial and road information

            gi.Resolution = GeomapResolution.Optimal;
            gi.MapType = GeomapType.Hybrid;
          }
          tr.Commit();
        }
      }
      catch (Autodesk.AutoCAD.Runtime.Exception)
      {
        ed.WriteMessage(
          "\nUnable to update geomap image entity." +
          "\nPlease check your internet connectivity and call " +
          "GEOMAPIMAGEUPDATE."
        );
      }
    }
  }
}

Here’s a screencast of these two commands in action:

And here’s an image of the results, should you not want to spend 90 seconds getting to the end of the above video. :-)

A drawing with some map images embedded

June 30, 2014

Autonomous drone navigation for Autodesk Photo on ReCap 360

My new drone: the Quanum Nova

I’ve been getting very interested in the field of autonomous robot navigation, of late.

I own a couple of different robots: while I haven’t quite gotten around to buying a robotic vacuum cleaner, I’ve had an autonomous lawn mower for several years, now, and bought a simple LEGO-carrying programmable robot for my kids for Christmas.

One of the reasons I find the field of autonomous robot navigation so interesting is that there’s a great deal of overlap with the algorithms and techniques needed for 3D reconstruction: robots need to sense their environment – often using photo or video input – and so there’s a great deal of image processing and computer vision involved. These algorithms are used heavily in Photo on ReCap 360, of course.

As mentioned a few times on this blog, I’ve followed a number of online courses to improve my knowledge of autonomous robot navigation. Here’s my “core curriculum”, which admittedly includes a fair amount of overlap (I’ve personally found this helps reinforce some of the core concepts):

With these classes under your belt, you should be feeling pretty good about the basics, having implemented a number of control algorithms in the various simulators provided with these classes. I came away excited about the field but with a strong desire to create something concrete that works on physical hardware.

Inspired by the last class, in particular, I decided on the following project concept: an autopilot for a quadcopter that captures a dataset tuned for use with the Photo on ReCap 360 service (much as we saw before with this drone-captured dataset).

To understand more about the process of capturing a 3D structure using a UAV, I recommend watching this webinar – one of a series I talked about recently – which covers the core concepts and goes into some detail on the conceptual modeling workflow we saw previously as well as the existing, commercial, autonomous drone capture system that connects with the new ReCap Photo Web API:

Assuming you don’t go with Skycatch’s high-end system, most of this workflow is still quite manual: my hope is to make it really easy to take a low-cost UAV and use it to capture a building (for instance) by just dropping it in front of the building and having the drone use its sensors to navigate around it, taking photos at the appropriate lateral and vertical intervals to feed Photo on ReCap 360 properly (integrating the new API, if that makes sense).

It wouldn’t need GPS waypoints to be added via flight-planning software: it would know where it was dropped and would stop once it had made its way back to the beginning, having completed a loop of the building and taken pictures of it at various altitudes.
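To get a feel for the numbers involved, here’s a rough back-of-the-envelope calculation – a sketch of my own, not anything taken from the 3DR or ReCap tooling, and the `photos_per_orbit` name and flat-footprint approximation are my assumptions: given the camera’s horizontal field of view and the side-overlap you want between consecutive shots, you can work out how many photos each loop of the building needs.

```python
import math

def photos_per_orbit(hfov_deg, overlap):
    """How many photos per loop give the requested side-overlap?

    Assumes the camera always points at the centre of the orbit, so each
    image's footprint width is roughly 2 * d * tan(hfov/2) at distance d.
    Working in units of d, the allowed arc between shots is
    (1 - overlap) * footprint, and the full orbit has length 2 * pi.
    """
    footprint = 2 * math.tan(math.radians(hfov_deg) / 2)  # width / distance
    step = (1 - overlap) * footprint                      # arc / distance
    return math.ceil(2 * math.pi / step)

# e.g. a 90-degree lens with 80% side-overlap needs 16 shots per loop
```

The nice thing about working in units of the capture distance is that the answer doesn’t depend on how far from the building you fly – only on the lens and the overlap you’re targeting.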

My next step, then, is to procure some hardware!

I went with the Quanum Nova mainly because it’s dirt cheap (~$300, although you’ll need to buy some batteries, a charger and a GoPro on top of that) and based on an ArduCopter board. ArduCopter is an open-source autopilot platform for UAVs, which means it’s much more likely I’ll be able to get in and hack its control algorithms.

I might otherwise have gone with the AR.Drone 2: the positive side of this drone is that it has a bunch of sensors that make autonomous navigation straightforward – the last course in the above list uses the AR.Drone as its hardware platform – but the downside is its image capture: the built-in, downward-facing camera isn’t good enough (or angled appropriately) to feed ReCap, and the drone isn’t well-suited to carrying a GoPro.

So I went with the Quanum Nova, even though I’m fairly sure it doesn’t have the sensors I need to detect the distance from the building it’s capturing and to avoid obstacles autonomously.

During my initial research, I posted to the DIY Drones forum and reached out within Autodesk to see what’s going on in this space. The great news is that over the weekend I found out (from a couple of different sources) about work 3D Robotics is doing in this area.

It turns out 3DR has just introduced a “one-button 3D mapping” feature into their DroidPlanner software, an Android-based mission planner they provide to work with their ArduCopter drones. It seems it’s possible to use DroidPlanner with non-3DR devices, albeit without support. Hopefully I’ll find a way to get it working with my Quanum Nova, once it arrives (it’s been on back-order for 10 days with no ETA); at a minimum that sounds like it’ll involve installing a telemetry module.

DroidPlanner flight-plan for autonomous 3D capture

The feature works on the basis of specifying an object to capture and a radius to fly around it (I wasn’t able to adjust the radius of the above flight-path, as I haven’t yet connected it to a UAV). The flight-path will keep the UAV pointed towards the centre of the object you’re capturing, of course, and you can specify additional “orbits” at different altitudes, should you so choose. The navigation is based on traditional GPS waypoints, from what I can tell, as my (still very limited) understanding is that distance sensors are not part of the base ArduCopter system.
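DroidPlanner keeps all of this internal, but conceptually generating such an orbit is straightforward. Here’s a Python sketch of what a GPS-waypoint orbit generator might look like – the function name, the tuple layout and the flat-earth approximation are all my own assumptions, not anything taken from the 3DR code:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius

def orbit_waypoints(lat, lon, radius_m, altitudes_m, points_per_orbit):
    """Generate (lat, lon, alt, yaw) tuples for circular capture orbits.

    Uses a flat-earth (equirectangular) approximation, which is fine for
    the tens-of-metres radii involved here. Yaw is the compass bearing
    from the waypoint back to the centre, keeping the camera on target.
    """
    waypoints = []
    for alt in altitudes_m:
        for i in range(points_per_orbit):
            theta = 2 * math.pi * i / points_per_orbit
            east = radius_m * math.sin(theta)   # metres east of centre
            north = radius_m * math.cos(theta)  # metres north of centre
            wp_lat = lat + math.degrees(north / EARTH_RADIUS_M)
            wp_lon = lon + math.degrees(
                east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
            # bearing from waypoint towards the centre (0 = north, 90 = east)
            yaw = math.degrees(math.atan2(-east, -north)) % 360
            waypoints.append((wp_lat, wp_lon, alt, yaw))
    return waypoints
```

Each additional altitude in the list simply adds another stacked orbit, much like the extra “orbits” DroidPlanner lets you specify.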

So there still seems to be some scope to do something that doesn’t even need a mission-planning application, but for now I’m going to take a look at this very interesting tool and see whether I need to go beyond it. I’m sure I’ll find another drone-related software project to scratch this particular itch, in any case. :-)

And who knows – maybe there’s a scenario that gives me the excuse to connect AutoCAD’s geo-location API into the process? Hmm...

June 27, 2014

ReCap API article on ProgrammableWeb

Just a quick post to finish up the week: over on ProgrammableWeb, there’s a nice article talking about the new ReCap Photo Web API and how a number of developers are making use of it to drive interesting new categories of application, such as products creating designs for custom hearing aids or automating the 4D capture of construction sites. Really cool stuff.

Programmable Web article on ReCap API

We’ll be talking more about web APIs over the coming weeks. Watch this space!

