I’ve been getting very interested in the field of autonomous robot navigation, of late.
I own a couple of different robots: while I haven’t quite gotten around to buying a robotic vacuum cleaner, I’ve had an autonomous lawn mower for several years now, and I bought a simple LEGO-carrying programmable robot for my kids for Christmas.
One of the reasons I find the field of autonomous robot navigation so interesting is that there’s a great deal of overlap with the algorithms and techniques needed for 3D reconstruction: robots need to sense their environment – often using photo or video input – and so there’s a great deal of image processing and computer vision involved. These algorithms are used heavily in Photo on ReCap 360, of course.
As mentioned a few times on this blog, I’ve followed a number of online courses to improve my knowledge of autonomous robot navigation. Here’s my “core curriculum”, which admittedly includes a fair amount of overlap (I’ve personally found this helps reinforce some of the core concepts):
- Coding the Matrix: Linear Algebra through Computer Science Applications
  - A solid refresher on linear algebra – very relevant to the subsequent classes
- Cyber-Physical Systems
  - A class that looks at issues around embedded system design and implementation
- Autonomous Mobile Robots
  - A 15-week class taking a thorough look at the field of autonomous robot navigation
- Autonomous Navigation for Flying Robots
  - A shorter 8-week class that covers many of the same concepts in less depth, with a particular focus on quadrocopter navigation
With these classes under your belt, you should be feeling pretty good about the basics, having implemented a number of control algorithms in the various simulators provided with these classes. I came away excited about the field but with a strong desire to create something concrete that works on physical hardware.
Inspired by the last class, in particular, I decided on the following project concept: an autopilot for a quadcopter that captures a dataset tuned for use with the Photo on ReCap 360 service (much as we saw before with this drone-captured dataset).
To understand more about the process of capturing a 3D structure using a UAV, I recommend watching this webinar – one of a series I talked about recently – which covers the core concepts and goes into some detail on the conceptual modeling workflow we saw previously, as well as the existing commercial autonomous drone-capture system that connects with the new ReCap Photo Web API:
Assuming you don’t go with Skycatch’s high-end system, most of this workflow is still quite manual: my hope is to make it really easy to take a low-cost UAV and use it to capture a building (for instance) by just dropping it in front of the building and having the drone use its sensors to navigate around it, taking photos at the appropriate lateral and vertical intervals to properly feed Photo on ReCap 360 (integrating the new API, if that makes sense).
It wouldn’t need GPS waypoints to be added via flight-planning software: it would know where it was dropped and would stop once it had made its way back to its starting point, having completed a loop of the building while taking pictures at various altitudes.
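To give a feel for what “appropriate lateral and vertical intervals” might mean in practice, here’s a minimal Python sketch of the usual photogrammetry rule of thumb: space the photos so that adjacent images overlap by a healthy margin. The field-of-view and overlap numbers below are illustrative assumptions, not measured values for any particular camera:

```python
import math

def capture_interval(distance_m, fov_deg, overlap=0.7):
    """Spacing (in metres) between successive photos so that adjacent
    images overlap by the given fraction -- 70% is a common rule of
    thumb for photogrammetry tools like Photo on ReCap 360."""
    # Width of the scene covered by one photo at this distance
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    # Move by the non-overlapping portion of that footprint
    return footprint * (1 - overlap)

# Hypothetical numbers: flying 10 m from the facade with a wide lens
lateral = capture_interval(10, 120, overlap=0.7)   # spacing along the facade
vertical = capture_interval(10, 90, overlap=0.5)   # spacing between altitude passes
```

The same function covers both directions – you’d just feed it the horizontal field of view for the lateral spacing and the vertical one for the altitude passes.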
My next step, then, is to procure some hardware!
So I went and bought a drone. A Quanum Nova, basically an Arducopter equivalent of the DJI Phantom. Importantly it's very hackable...
— Kean Walmsley (@keanw) June 21, 2014
I went with the Quanum Nova mainly because it’s dirt cheap (~$300, although you’ll need to buy some batteries, a charger and a GoPro on top of that) and based on an ArduCopter board. ArduCopter is an open-source autopilot platform – both hardware and software – for UAVs, which means it’s much more likely to be possible to get in and hack its control algorithms.
I would otherwise probably have gone with the AR.Drone 2.0: the upside of this drone is that it has a bunch of sensors that make autonomous navigation straightforward – the last course in the above list uses the AR.Drone as its hardware platform – but the downside is its ability to capture images: the built-in, downward-facing camera isn’t good enough (or angled appropriately) to feed ReCap, and the drone isn’t well-suited to carrying a GoPro.
So I went with the Quanum Nova, even though I’m fairly sure it doesn’t have the sensors I need to detect the distance from the building it’s capturing and to avoid obstacles autonomously.
During my initial research, I posted to the DIY Drones forum and reached out within Autodesk to see what’s going on in this space. The great news is that over the weekend I found out (from a couple of different sources) about work 3D Robotics is doing in this area.
It turns out 3DR has just introduced a “one-button 3D mapping” feature into their DroidPlanner software, an Android-based mission planner they provide to work with their ArduCopter drones. It seems that it’s possible to use DroidPlanner with non-3DR devices, albeit without support. Hopefully I’ll find a way to get it working with my Quanum Nova, once it arrives (it’s been on back-order for 10 days with no ETA), which at a minimum sounds like it’ll involve installing a telemetry module.
The feature works on the basis of specifying an object to capture and a radius to fly around it (I wasn’t able to adjust the radius of the above flight-path, as I haven’t yet connected it to a UAV). The flight-path keeps the UAV pointed towards the centre of the object you’re capturing, of course, and you can specify additional “orbits” at different altitudes, should you so choose. The navigation is based on traditional GPS waypoints, from what I can tell, as my (still very limited) understanding is that distance sensors are not part of the base ArduCopter system.
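Just to illustrate the kind of flight-path such a feature might generate – this is emphatically not DroidPlanner’s implementation, just my own sketch of the idea – here’s some Python that lays out GPS waypoints on circular “orbits” at a couple of altitudes, with the heading at each waypoint pointing back towards the centre of the object being captured:

```python
import math

M_PER_DEG_LAT = 111_320  # rough metres per degree of latitude

def orbit_waypoints(center_lat, center_lon, radius_m, altitudes_m,
                    points_per_orbit=12):
    """Generate (lat, lon, alt, heading) waypoints on circles around a
    point of interest, with the camera heading kept towards the centre.
    Uses a flat-earth approximation, which is fine at building scale."""
    waypoints = []
    for alt in altitudes_m:
        for i in range(points_per_orbit):
            bearing = 360.0 * i / points_per_orbit  # position on circle, from north
            rad = math.radians(bearing)
            lat = center_lat + (radius_m * math.cos(rad)) / M_PER_DEG_LAT
            lon = center_lon + (radius_m * math.sin(rad)) / (
                M_PER_DEG_LAT * math.cos(math.radians(center_lat)))
            heading = (bearing + 180.0) % 360.0  # face back towards the centre
            waypoints.append((lat, lon, alt, heading))
    return waypoints

# e.g. two 30 m-radius orbits at 15 m and 25 m (coordinates are made up)
wps = orbit_waypoints(46.52, 6.63, 30, [15, 25])
```

A real mission planner would then upload these as standard waypoint commands – which is presumably why the feature works on any GPS-equipped ArduCopter, with no distance sensors required.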
So there still seems to be some scope to do something that doesn’t even need a mission-planning application, but for now I’m going to take a look at this very interesting tool and see whether I need to go beyond it. I’m sure I’ll find another drone-related software project to help scratch this particular itch, should I need one. :-)
And who knows – maybe there’s a scenario that gives me the excuse to connect AutoCAD’s geo-location API into the process? Hmm...