Kean Walmsley
July 18, 2014

The need for autonomous drone navigation

My Quanum Nova arrived from Hong Kong, yesterday. I’d just been tracking the parcel’s progress when the doorbell rang and the postman handed it to me: he seemed surprised that I didn’t mind paying the additional SFr 47 customs fee (on top of the $299 price-tag and what I had to pay for batteries) and that I even let him keep the change. I was in that good a mood. ;-)


Reality gradually set in as I assembled the device, although that was pretty straightforward, overall. The assembly instructions were better than expected, and I was soon able to turn the device on (I’d charged all four of my 2700mAh li-poly batteries in the weeks I’d been waiting for the drone to show up, so I was locked and loaded… interestingly enough, the day before the drone arrived I received a notification that higher-capacity batteries are now available, too).


Unfortunately the instructions for getting the drone’s motors to arm were much less usable. In theory I was meant to use the left thumbstick on the controller – pushing it down to the bottom-right – to arm the motors. But try as I might I couldn’t get this to work. I wasn’t even sure whether the transmitter was controlling the drone at all: I really couldn’t get it to respond in any way.

Eventually I decided to connect it up to my PC and see if I could get anything to work using APM Planner 2.0. I was able to connect to the device – as it contains an ArduCopter-compatible board based on the Arduino Mega 2560 – and could use APM Planner to adjust some options. Out of desperation – although I’d seen that a few people on the forums had done this themselves – I upgraded the ArduCopter firmware to v3.1.5.

Connecting the APM Planner 2 to my Quanum Nova

This worked, but didn’t help address the connection problem. I then went and re-calibrated the radio, which at least allowed me to see that the controller was indeed connecting to the drone. The calibration also allowed the motors to arm: you can imagine my surprise when this happened in my living room (yes, with the propellers attached… who thinks to remove them when there’s every chance your drone’s a dud?) and the drone nearly got tangled up with the USB tether attaching it to my PC.

Calibrating the radio using APM Planner 2

Anyway, at least I knew at this stage that my drone was functional, and that I actually had to hold the right thumbstick downwards, too, for the motors to arm. The instructions could really do with this additional nugget of information, but hey.

The next thing I did was – of course – head outside and try an actual flight. First, though, I attached my GoPro-equivalent camera, my CamOne Infinity, to the bottom of the Nova with the provided mount. As far as I’m aware, Photo on ReCap 360 won’t work with stills coming from the Infinity, but this was more to capture the event than to provide data for a 3D reconstruction.

I’m glad I did: the results were hilarious. The drone took off fairly steadily, but I then made the rookie move of switching to what I thought was a stabilized flying mode – which instead sent the drone skywards at high speed. I panicked, tried to adjust, and brought the drone back down to earth with a thump. This dislodged the camera, which stayed in the grass filming the departure of the drone as it – now with much less weight – looped back up into the air and crashed elsewhere in the garden.

Here’s the video of the event. I’ve blurred the faces to protect the privacy of a family member; I’m actually glad this also obscured the look on my face as most of this was playing out.




The drone is, incredibly, largely intact. The upper sides of two of the arms have cracked – as has the GPS tower on top – but nothing that can’t be held together with tape. I can’t help feeling that – despite any sense of personal ineptness – this is a fairly typical experience for first-time drone pilots. For me, though, it underscores two things: 1) I did well to start off with a $300 drone and 2) I really need to get to the point where software is taking care of an increasing amount of the navigation effort. If I’d wanted to be a pilot I would have chosen another career, after all. :-)

I’m heading off to Greece on Sunday for a couple of weeks, so it’ll be some time before I blog again. I’ll hopefully be able to tweet once or twice from the beach, but I’m planning to make this a proper break to be with the family. When I get back I’m hoping to spend a bit more time with the Nova, and perhaps even to start working with the new 3DR X8 that our office has just bought, too, which should work really well with the new “one button 3D capture” DroidPlanner functionality.

July 16, 2014

My first Autodesk 360 viewer sample

As mentioned last week, I’ve been having fun with Fusion 360 to prepare a model to be displayed in the new Autodesk 360 viewer. The sample is now ready to view, although I’m not yet quite ready to post the code directly here, mainly because the API isn’t yet publicly usable.

Here’s the app for you to take for a spin, as it were: Steampunk Morgan Viewer

The Autodesk 360 Viewing & Data API is currently being piloted by a few key partners, and hopefully we’ll soon be broadening the scope to allow others to get involved (we first have to iron out any issues that might impact scalability, of course).

But let me give you a few pointers about what I did and what to expect when developing your own web applications that connect with this technology.

Firstly, as this is a classic set of web-services requiring authentication, there’s a need to request and provide an authentication token. You do this by calling a particular web-service API with your client ID and secret: standard authentication stuff. But this means you’re going to need some kind of server-resident code to get the token: it’s a really bad idea to embed your client ID & secret in client-side HTML or JavaScript.

To implement a server-side API, I’ve used Node.js. It’s the first time I’ve done so, but it was really easy to do (and I’d been meaning to give it a try).
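For consistency with the code elsewhere on this blog, here’s a minimal sketch of that token request in C# rather than JavaScript. Be warned that the endpoint URL and parameter names are assumptions on my part – the pilot API’s details aren’t yet public – but the client-credentials flow itself is standard:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenService
{
  // Hypothetical endpoint: the real URL comes with API access
  const string AuthUrl =
    "https://developer.api.autodesk.com/authentication/v1/authenticate";

  // Exchange the client ID & secret for an access token. This runs
  // on the server: the credentials never reach the browser, only
  // the resulting token does
  public static async Task<string> GetTokenAsync(
    string clientId, string clientSecret)
  {
    using (var client = new HttpClient())
    {
      var content = new FormUrlEncodedContent(
        new Dictionary<string, string>
        {
          { "client_id", clientId },
          { "client_secret", clientSecret },
          { "grant_type", "client_credentials" }
        });
      var res = await client.PostAsync(AuthUrl, content);
      res.EnsureSuccessStatusCode();

      // The response body is JSON containing an access_token
      // field: parse it with your JSON library of choice
      return await res.Content.ReadAsStringAsync();
    }
  }
}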

For my hosting infrastructure, I went with Heroku (thanks, Cameron!). This is a lightweight, scalable hosting environment that has some really cool integrations: you host your code on GitHub and deploy directly from there to your Heroku instance via the command-line. Integrating Node.js and New Relic (for application monitoring) took just a few commands and minor code changes. The whole setup was remarkably painless.

Heroku provides a basic level of usage per application for free – 750 dyno hours per month, I believe. I’m running this app right now on a single instance – it’s really only performing authentication and serving up static HTML; the heavy lifting is done by the code hosted on AWS that feeds data to the connected viewer – but I can scale this up according to usage. It will be interesting to see how the site gets used: New Relic will provide me with that level of detail, I hope.

I found the Steampunk HTML UI for the sample on this page, which makes heavy use of CSS transforms. I contacted the author, Dan Wellman, who very generously provided the PSD file he’d used to generate the various web assets. This meant I didn’t have to tweak the generated files: I could use Photoshop directly to re-generate my own assets with the required changes. This was really nice of you, Dan – thanks for the kindness!

We’ll take a look at the code in more detail in a future post, as well as the steps to get the model accessible from the embedded viewer. (A quick outline: you need to upload the file – in my case a .F3D archive exported from Fusion 360 – and then fire off a translation request, all using an authorization token generated using your client data, allowing an app using the same client data to access it in future. All this can be done using cURL at the command-line – if doing a one-off translation as I have done – or programmatically if you need something more dynamic.)
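In the meantime, here’s a hedged sketch of those two steps done programmatically – again in C#, and again with hypothetical endpoint URLs and payload names, since the pilot API isn’t public – just to show the shape of the workflow:

using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class TranslationHelper
{
  // Hypothetical endpoints: the real routes come with API access
  const string UploadUrl =
    "https://developer.api.autodesk.com/oss/v1/buckets/mybucket/objects/";
  const string TranslateUrl =
    "https://developer.api.autodesk.com/viewingservice/v1/register";

  public static async Task UploadAndTranslateAsync(
    string token, string path)
  {
    using (var client = new HttpClient())
    {
      // All calls carry the token generated from our client data

      client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", token);

      // 1. Upload the file (the .F3D archive, in my case)

      var upload = new ByteArrayContent(File.ReadAllBytes(path));
      upload.Headers.ContentType =
        new MediaTypeHeaderValue("application/octet-stream");
      var uploadRes = await client.PutAsync(
        UploadUrl + Path.GetFileName(path), upload);
      uploadRes.EnsureSuccessStatusCode();

      // 2. Fire off the translation request. The upload response
      // contains an identifier (a URN) for the file, which is what
      // gets passed in here

      var urn = "<urn-from-upload-response>";
      var translate = new StringContent(
        "{ \"urn\": \"" + urn + "\" }", Encoding.UTF8,
        "application/json");
      var translateRes =
        await client.PostAsync(TranslateUrl, translate);
      translateRes.EnsureSuccessStatusCode();
    }
  }
}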

In terms of my use of the Viewer API: I had hoped to use component isolation to highlight different parts of the model, but as the original model was imported from Alias with a very flat component hierarchy (containing tens of thousands of bodies), there was basically no hope of me doing this without a large team of modelling specialists at my disposal. So I opted for swapping the view when the various buttons are clicked, instead. This actually works reasonably well, so I’m happy sticking with this approach, for now.

July 15, 2014

Updates to Memento and ReCap Studio

The Memento team has been beavering away, delivering a number of updates containing some really interesting features that I’ve so far neglected to mention.

In the latest release (hosted on Autodesk Labs on the Autodesk Feedback Community), for instance, you can now access point cloud data via RCP export, as well as being able to generate 3D reconstructions with “Ultra” quality and smart textures (these are features you’d normally have to pay for when using Photo on ReCap 360, but this is being provided for free while it’s still on Autodesk Labs).

Here’s a quick reverse chronological look at recent releases of Project Memento:

  • 1.0.9.2 (7/10/14)
    • ‘Ultra’ quality for the photo-to-mesh reconstruction ***
    • ‘Smart Texture’ option in the photo-to-mesh reconstruction ***
    • RCP Export
    • Export: Reporting polygon count in the export dialog
    • Export: Decimation control (0-99%) in the export dialog
    • Other minor fixes and general stability
      ***(both Ultra quality mode and Smart Textures are available at no charge in Memento, given that it’s still a Labs product)
  • 1.0.9.1 (6/18/14)
    • Batch auto-fix defects
      • Batch fixing of holes and islands only, for now. This is, however, an important first step towards making it much easier and faster to fix badly created models with many errors (meshes that often come from laser scans or CAT-scan-to-mesh software packages)
    • New tools in the 3D printing environment
      • Automatic alignment to the base plane at first entering of the 3D print environment (if applicable)
      • Interactive orientation of the model in the 3d print envelope – just use your mouse to orient the model
      • Scale to fit within the printer bounds
    • More reporting tools around 3D printing
      • We can now calculate and report the volume of the print material and of the support material used in printing (support material volume is only reported for 3D printers that do full support, such as Objet machines; for all other printers the support material data will not be available)
  • 1.0.8.0 (5/7/14)
    • Full resolution 3D Print environment
      • You can now select the 3D printer you will be printing on, see its bed size and how your mesh fits into it, scale the model for printing, hollow it, set units, etc. – all natively in the Memento environment – and 3D print your models at the best resolution your printer is capable of
    • Improved in-canvas visualization of models
    • Brush resizing now has an exposed UI component
      • It’s now easier to change the brush size, which is crucial to successful healing and fixing. Upon selecting the brush tool (right mouse click, Select, Brush) you will now see a slider that controls the brush size. (You can also resize the brush using the brackets [ and ], as well as Alt + mouse wheel.) These are temporary alternatives that will soon be consolidated into a final design with a more sophisticated exposed UI.
    • We also used the opportunity to fix some reported bugs and continue with performance improvements
    • Fix for the Heartbleed vulnerability

I expect to be using Memento more as soon as my new drone arrives (it’s in the country according to the package tracking, at least!).

Regarding the other product update mentioned in the title, there’s a hotfix posted for ReCap Studio which includes these fixes, according to the release notes:

  • Manual Stitch: improvements to increase general stability
  • Manual Stitch: a magnifier is displayed in the opposite panel to help correctly place the point in the second image
  • Keep dialog open in case of submission error message
  • Sharing option correctly displayed (download and download & update)
  • Allow free resubmission on a failed ultra project

July 10, 2014

Reminder of AU 2014 advance pass availability

A quick reminder – as I already posted on this topic a few months ago – about the “early bird” $600 discount that’s available now – and until August 19th – for this year’s AU. It’s a great way to get first dibs on the classes of your choice, as you get to register a week before people who don’t hold advance passes (such as Autodesk employees, sniff-sniff ;-).

It’s already shaping up to be a hectic AU for me, again: both the classes I submitted were accepted (thanks, developer track leads!) and it turns out I’ll be participating in a couple of others, too. Stephen Preston tells me he won’t be able to make it to AU, this year. It’s the first one he’s missed in a long time, but he’s missing it for good reasons. Hence his request that I host the (until now, at least) very popular “Meet the AutoCAD API Experts” panel session. Stephen is a dab hand at hosting this panel – it really won’t be the same without his dry sense of humour – but I’ll do my best to at least partially fill the significant void he’ll leave at this year’s event.

The fourth (and hopefully last) speaking engagement at this year’s AU – although I have no idea which order they’ll actually be scheduled – is to co-speak at a session about the car configurator Autodesk delivered to Morgan for this year’s Geneva Motor Show. I wasn’t involved in the development work for the project, so I’m very happy to be co-speaking on that one (it should be a lot of fun).

That’s it for this week: I’m heading off to Munich tomorrow to participate in this year’s Autodesk Football Tournament (yes, it’s an internal company event – sorry to those who have expressed an interest in joining in the past). Teams from all over the world will be fighting it out, and some players will no doubt stay on to watch Sunday’s World Cup final, too (presumably hoping for a German win, considering the event’s location :-). I’m planning to head back on Sunday to watch the match from home, but we’ll see what happens. I’ll be back posting during the course of next week, before heading out on vacation for 2-3 weeks after that.

July 08, 2014

Steampunking a Morgan 3 Wheeler using Fusion 360

My friends in the Autodesk Developer Network team asked me to get involved with creating a sample for the API we’re planning to launch soon for the new Autodesk 360 viewer. They were quite specific about the requirements, which was very helpful: something fun, perhaps with a steampunk theme, that shows some interesting possibilities around both the HTML5 container and the embedded viewer. The Morgan 3 Wheeler was also suggested as a possible model to look into hosting, so I really didn’t need to be asked twice. ;-)

I started by tracking down a model: I ended up using the Inventor files posted for the Morgan advertisement competition towards the end of last year. The posted archive actually has the originating Alias model inside the ZIP, it turns out, so I went ahead and imported that into Fusion 360.

The model is huge, so this has been a real test of the technology, and Fusion 360 has mostly been up to the challenge. I’m new to Fusion, but the experience has been positive – I’ve had to learn quite a bit as I’ve gone along, but the UI has been intuitive enough. That said, I’m really only applying materials to geometry, so nothing very challenging from a hardcore modelling perspective.

Here are a few shots of the model. I really need a “grimy” visual style to go for the full-on steampunk feel, but overall I like the results, so far.

Let’s start with a view of the front:

A view from the front in Fusion 360

Here’s an animation of a few images taken at different stages of applying a material to a set of components. Yes, I know that copper is a really poor choice for a suspension spring – just as brass is less than ideal for an exhaust pipe – but this is all about the look, not at all about the eventual presumed performance. :-)

Making a suspension spring copper using Fusion 360

And here’s an overall view of the car in Fusion 360…

The overall Morgan 3 Wheeler

… as well as one in the Autodesk 360 viewer, although not all the materials appear with full fidelity (and the generic material currently comes through as red – I haven’t started working on the interior, as yet):

Our Fusion 360 model in the Autodesk 360 viewer

Oh, and the RaaS (Rendering as a Service) images look really beautiful, in case you’re wondering about that:

RaaS images of the steampunked Morgan model

Right now I need to work on uploading the model to the data service and embedding the viewer into a fun steampunk-themed HTML page I found on the web… I’ll post more about that, in due course.

July 04, 2014

Independence Day

Most of this week I’ve been heads-down trying to finish a sample demonstrating the API that’s coming for the new Autodesk 360 viewer (of which we saw a sneak peek in this recent post), so today I don’t have much to talk about. The sample is pretty cool and involves my favourite car. But hopefully you’ll be able to see it in action before too long, so there’s really no need to say more about it, for now. :-)

A good portion of this blog’s readership is currently grilling meat and celebrating the 4th of July, so I won’t beat myself up too much about not having much to say. (For those of you still reading emails before celebrating, here’s a helpful article on taking photos of fireworks, even with your mobile phone.) Enjoy your long weekend, those of you who have one!

Something I came across, this morning, is the openFrameworks project and, specifically, the ofxPiMapper add-on. This interesting tool allows you to specify portions of an image to project onto surfaces, and handles all the coordinate system transformations.

It seems to be very handy for art installations, in particular. Here’s an example use of this technology, stressing its ability to run on a Raspberry Pi – which, while interesting, isn’t the most interesting thing about it, from my perspective:




This has me thinking about how it might be used to display a 3D model – which is then viewable from multiple angles – using a single projector. You could map content to surfaces on three connecting sides of a plain 3D cube, for instance, but that wouldn’t be especially compelling for the viewer. Which then had me thinking about whether you could project onto Plexiglas (as I did for a Halloween illusion a few years ago), to create more of a holographic effect. But I can’t see how that would work without multiple projectors (or at least some mirrors).

What I like most about this, ultimately, is that it’s actually a project about projecting model projections. :-)  We’ll see if it ends up going anywhere, but that’s another question entirely, and one for another day.

July 02, 2014

Embedding a map image in an AutoCAD drawing using .NET

In this recent post we saw how to set our geographic location inside an AutoCAD drawing – essentially geo-referencing the model it contains – programmatically.

In today’s post we’re going to capture a section of the displayed map and embed it inside the drawing, much as the GEOMAPIMAGE command does. In fact, we’re going to use the GEOMAPIMAGE command to do most of the heavy lifting: we’ll simply call the command and then pick up the created GeomapImage object to manipulate its settings, adjusting some image properties (brightness, contrast and fade settings) and having the map image display hybrid (aerial + road) information.

Here’s the C# code implementing the CGI command, which should be run after the IGR command we implemented last time:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;

namespace GeoLocationAPI
{
  public class Commands
  {
    [CommandMethod("CGI")]
    public void CreateGeoMapImage()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (doc == null)
        return;
      var ed = doc.Editor;
      var db = doc.Database;

      // Get the first corner of our area to convert to a
      // GeomapImage

      var ppo = new PromptPointOptions("\nSpecify first corner");
      var ppr = ed.GetPoint(ppo);
      if (ppr.Status != PromptStatus.OK)
        return;

      var first = ppr.Value;

      // And get the second point as a corner (to rubber-band
      // the selection)

      var pco =
        new PromptCornerOptions("\nSpecify second corner", first);
      ppr = ed.GetCorner(pco);

      if (ppr.Status != PromptStatus.OK)
        return;

      var second = ppr.Value;

      // We'll use an event handler on the Database to check for
      // GeomapImage entities being added
      // (we'll use a lambda, but assigned to a variable so we can
      // remove it afterwards)

      ObjectId giId = ObjectId.Null;
      ObjectEventHandler handler =
        (s, e) =>
        {
          if (e.DBObject is GeomapImage)
          {
            giId = e.DBObject.ObjectId;
          }
        };

      // Simply call the GEOMAPIMAGE command with the two points

      db.ObjectAppended += handler;
      ed.Command("_.GEOMAPIMAGE", first, second);
      db.ObjectAppended -= handler;

      // Only continue if we've collected a valid ObjectId

      if (giId == ObjectId.Null)
        return;

      // Open the entity and change some values

      try
      {
        using (var tr = doc.TransactionManager.StartTransaction())
        {
          // Open the object we collected, making sure it's a
          // GeomapImage

          var gi =
            tr.GetObject(giId, OpenMode.ForWrite) as GeomapImage;
          if (gi != null)
          {
            // Let's adjust the brightness/contrast/fade of the
            // GeomapImage

            gi.Brightness = 90;
            gi.Contrast = 40;
            gi.Fade = 20;

            // And make sure it's at the right resolution and
            // shows both aerial and road information

            gi.Resolution = GeomapResolution.Optimal;
            gi.MapType = GeomapType.Hybrid;

            gi.UpdateMapImage(true);
          }

          tr.Commit();
        }
      }
      catch (Autodesk.AutoCAD.Runtime.Exception)
      {
        ed.WriteMessage(
          "\nUnable to update geomap image entity." +
          "\nPlease check your internet connectivity and call " +
          "GEOMAPIMAGEUPDATE."
        );
      }
    }
  }
}

Here’s a screencast of these two commands in action:




And here’s an image of the results, should you not want to spend 90 seconds getting to the end of the above video. :-)

A drawing with some map images embedded

June 30, 2014

Autonomous drone navigation for Autodesk Photo on ReCap 360

My new drone: the Quanum Nova from HobbyKing.com

I’ve been getting very interested in the field of autonomous robot navigation, of late.

I own a couple of different robots: while I haven’t quite gotten around to buying a robotic vacuum cleaner, I’ve had an autonomous lawn mower for several years, now, and bought a simple LEGO-carrying programmable robot for my kids for Christmas.

One of the reasons I find the field of autonomous robot navigation so interesting is that there’s a great deal of overlap with the algorithms and techniques needed for 3D reconstruction: robots need to sense their environment – often using photo or video input – and so there’s a great deal of image processing and computer vision involved. These algorithms are used heavily in Photo on ReCap 360, of course.

As mentioned a few times on this blog, I’ve followed a number of online courses to improve my knowledge of autonomous robot navigation. Here’s my “core curriculum”, which admittedly includes a fair amount of overlap (I’ve personally found this helps reinforce some of the core concepts):

With these classes under your belt, you should be feeling pretty good about the basics, having implemented a number of control algorithms in the various simulators they provide. I came away excited about the field but with a strong desire to create something concrete that works on physical hardware.

Inspired by the last class, in particular, I decided on the following project concept: an autopilot for a quadcopter that captures a dataset tuned for use with the Photo on ReCap 360 service (much as we saw before with this drone-captured dataset).

To understand more about the process of capturing a 3D structure using a UAV, I recommend watching this webinar – one of a series I talked about recently – which covers the core concepts and goes into some detail on the conceptual modeling workflow we saw previously as well as the existing, commercial, autonomous drone capture system that connects with the new ReCap Photo Web API:




Assuming you don’t go with Skycatch’s high-end system, most of this workflow is still quite manual. My hope is to make it really easy to take a low-cost UAV and use it to capture a building (for instance) by just dropping it in front of the building and having the drone use its sensors to navigate around it, taking photos at the appropriate lateral and vertical intervals to properly feed Photo on ReCap 360 (integrating the new API, if that makes sense).

It wouldn’t need GPS waypoints to be added via flight-planning software: it would know where it was dropped and would stop once it made its way back to the beginning, having completed a loop of the building, taking pictures at various altitudes.
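To give a flavour of the coverage pattern I have in mind, here’s a purely illustrative sketch – made-up parameter values, with positions expressed relative to the drop point rather than as GPS waypoints – of generating camera positions that orbit a building at regular lateral and vertical intervals:

using System;
using System.Collections.Generic;

public struct CameraPosition
{
  public double X, Y, Z;   // metres, relative to the drop point
  public double Heading;   // direction to face, in radians
}

public static class CapturePlanner
{
  // Plan a set of orbits around the building: photos at a regular
  // lateral spacing, repeated at several altitudes, with the
  // camera always pointed at the centre of the orbit
  public static List<CameraPosition> PlanOrbits(
    double radius, double photoSpacing,
    double minAltitude, double maxAltitude, int orbitCount)
  {
    var positions = new List<CameraPosition>();
    int photosPerOrbit =
      (int)Math.Ceiling(2 * Math.PI * radius / photoSpacing);

    for (int orbit = 0; orbit < orbitCount; orbit++)
    {
      double altitude = orbitCount > 1
        ? minAltitude +
          orbit * (maxAltitude - minAltitude) / (orbitCount - 1)
        : minAltitude;

      for (int i = 0; i < photosPerOrbit; i++)
      {
        double angle = 2 * Math.PI * i / photosPerOrbit;
        positions.Add(new CameraPosition
        {
          X = radius * Math.Cos(angle),
          Y = radius * Math.Sin(angle),
          Z = altitude,
          // Face back towards the centre (i.e. the building)
          Heading = angle + Math.PI
        });
      }
    }
    return positions;
  }
}

In a real implementation the drone’s sensors would replace the fixed radius, of course – that’s the whole point – but the lateral/vertical spacing logic would stay much the same.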

My next step, then, is to procure some hardware!



I went with the Quanum Nova mainly because it’s dirt cheap (~$300, although you’ll need to buy some batteries, a charger and a GoPro on top of that) and because it’s based on an ArduCopter board. ArduCopter is an open-source autopilot platform for UAVs, which means it’s much more likely to be possible to get in and hack its control algorithms.

I might otherwise have gone with the AR.Drone 2: the positive side of this drone is that it has a bunch of sensors that make autonomous navigation straightforward – the last course in the above list focuses on the AR.Drone as its hardware platform – but the downside relates to its ability to capture images: the in-built, downward-facing camera isn’t good enough (or angled appropriately) to feed ReCap, and the drone isn’t well-suited to carrying a GoPro.

So I went with the Quanum Nova, even though I’m fairly sure it doesn’t have the sensors I need to detect the distance from the building it’s capturing and to avoid obstacles autonomously.

During my initial research, I posted to the DIY Drones forum and reached out within Autodesk to see what’s going on in this space. The great news is that over the weekend I found out (from a couple of different sources) about work 3D Robotics is doing in this area.

It turns out 3DR has just introduced a “one-button 3D mapping” feature into their DroidPlanner software, an Android-based mission planner they provide to work with their ArduCopter drones. It seems that it’s possible to use DroidPlanner with non-3DR devices, albeit without support.

Droid planner flight-plan for autonomous 3D capture

Hopefully I’ll find a way to get it working with my Quanum Nova, once it arrives (it’s been on back-order for 10 days with no ETA), which at a minimum sounds like it’ll involve installing a telemetry module.

The feature works on the basis of specifying an object to capture and a radius to fly around it (I wasn’t able to adjust the radius of the above flight-path, as I haven’t yet connected it to a UAV). The flight-path will keep the UAV pointed towards the centre of the object you’re capturing, of course, and you can specify additional “orbits” at different altitudes, should you so choose. The navigation is based on traditional GPS waypoints, from what I can tell, as my (still very limited) understanding is that distance sensors are not part of the base ArduCopter system.

So there still seems to be some scope to do something that doesn’t even need a mission-planning application, but for now I’m going to take a look at this very interesting tool and see whether I need to go beyond it. If I do, I’m sure I’ll find another drone-related software project to help scratch this particular itch. :-)

And who knows – maybe there’s a scenario that gives me the excuse to connect AutoCAD’s geo-location API into the process? Hmm...

June 27, 2014

ReCap API article on ProgrammableWeb

Just a quick post to finish up the week: over on ProgrammableWeb, there’s a nice article talking about the new ReCap Photo Web API and how a number of developers are making use of it to drive interesting new categories of application, such as products creating designs for custom hearing aids or automating the 4D capture of construction sites. Really cool stuff.

Programmable Web article on ReCap API

We’ll be talking more about web APIs over the coming weeks. Watch this space!

June 26, 2014

Attaching geo-location data to an AutoCAD drawing using .NET

AutoCAD’s geo-location API is a topic I’ve been meaning (and even promising) to cover for some time now. So here we are. :-)

The below code sample is based on one shown at ADN’s DevDays tour at the end of 2013 – for the AutoCAD 2014 release – but the API ended up not being fully usable (at least as far as I recall: someone should jump in and correct me if I have this wrong) until the 2015 release.

I’ve taken the opportunity to use Editor.Command() to call a couple of commands synchronously – to turn on the GEOMAP information and to zoom to a circle that we create around our location – now that this particular API is available.

Here’s the C# code implementing the IGR command:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;

namespace GeoLocationAPI
{
  public class Commands
  {
    [CommandMethod("IGR")]
    public void InsertGeoRef()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (doc == null)
        return;
      var ed = doc.Editor;
      var db = doc.Database;
      var msId = SymbolUtilityServices.GetBlockModelSpaceId(db);

      // Check whether the drawing already has geolocation data

      bool hasGeoData = false;
      try
      {
        var gdId = db.GeoDataObject;
        hasGeoData = true;
      }
      catch { }

      if (hasGeoData)
      {
        // Report and return: could also open the object for
        // write and modify its properties, of course

        ed.WriteMessage("\nDrawing already has geo-location data!");
        return;
      }

      // Let's create some geolocation data for this drawing,
      // using a handy method to add it to the modelspace
      // (it gets added to the extension dictionary)

      var data = new GeoLocationData();
      data.BlockTableRecordId = msId;
      data.PostToDb();

      // We're going to define our geolocation in terms of
      // latitude/longitude using the Mercator projection
      // http://en.wikipedia.org/wiki/Mercator_projection

      data.CoordinateSystem = "WORLD-MERCATOR";
      data.TypeOfCoordinates = TypeOfCoordinates.CoordinateTypeGrid;

      // Use the long-lat for La Tene, my local "beach"
      // (it's on a lake, after all :-)

      var geoPt = new Point3d(7.019438, 47.005247, 0);

      // Transform from a geographic to a modelspace point
      // and add the information to our geolocation data

      var wcsPt = data.TransformFromLonLatAlt(geoPt);
      data.DesignPoint = wcsPt;
      data.ReferencePoint = geoPt;

      // Let's launch the GEOMAP command to show our geographic
      // overlay

      ed.Command("_.GEOMAP", "_AERIAL");

      // Now we'll add a circle around our location
      // and that will provide the extents for our zoom

      using (var tr = db.TransactionManager.StartTransaction())
      {
        var ms =
          tr.GetObject(msId, OpenMode.ForWrite) as BlockTableRecord;
        if (ms != null)
        {
          // Add a red circle of 7K units radius
          // centred on our point

          var circle = new Circle(wcsPt, Vector3d.ZAxis, 7000);
          circle.ColorIndex = 1;
          ms.AppendEntity(circle);
          tr.AddNewlyCreatedDBObject(circle, true);
        }
        tr.Commit();
      }

      // And we'll zoom to the circle's extents

      ed.Command("_.ZOOM", "_OBJECT", "_L", "");
    }
  }
}

When we run the code, we see that our geographic location gets set to that of La Tene (a place close to my home that I’ve mentioned before) and that a circle is created with a radius of 7,000 units (maybe that’s 7 km? it looks about right) around our location:

GeoLocation in action

In case you’re wondering, the choice of 7000 was arbitrary: it made the map look good with the fields of oilseed rape at the upper left (setting a larger radius caused a different, less colourful set of imagery to be loaded).

A quick word of warning about the TransformFromLonLatAlt() method: it assumes the Point3d passed in has longitude, latitude and altitude in the x, y and z fields, in that order. I made the mistake of copying the lat-long values directly across from Google Maps (not realising I needed long-lat), and found that AutoCAD zoomed into a location about 1km inside Ethiopia’s border with Somalia. Seems like the kind of thing someone could make a fun activity out of: send a letter to the address at the “opposite” side of the planet from your own (except it isn’t opposite, of course, but I honestly don’t know the best way to describe what happens when you swap latitude with longitude).
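If it helps anyone avoid the same accidental trip to the Horn of Africa, here’s a trivial wrapper – FromLatLong() is a hypothetical helper of my own, not part of the API – that takes the values in the latitude-first order Google Maps displays and flips them into the order TransformFromLonLatAlt() expects:

using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;

public static class GeoHelpers
{
  // Accept coordinates in the "latitude, longitude" order that
  // Google Maps displays and swap them into the longitude-first
  // Point3d that TransformFromLonLatAlt() expects
  public static Point3d FromLatLong(
    GeoLocationData data, double latitude, double longitude)
  {
    // Note the swap: x = longitude, y = latitude
    var geoPt = new Point3d(longitude, latitude, 0);
    return data.TransformFromLonLatAlt(geoPt);
  }
}

// e.g. Google Maps shows La Tene as 47.005247, 7.019438:
// var wcsPt = GeoHelpers.FromLatLong(data, 47.005247, 7.019438);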

There’s more I want to do with the geo-location API inside AutoCAD: I want to be able to clip an image and access/modify its properties programmatically, for instance. We’ll take a look at that in an upcoming post. Also, do let me know if you have additional geo-location tasks you’d like to see performed, and I’ll see what’s possible.
