Through the Interface

May 28, 2015

Signing your application modules for AutoCAD 2016 – Part 1

Padlocks

This series of posts is one I’ve been meaning to write since AutoCAD 2016 started shipping. Thankfully a number of other people have filled the void, in the meantime, so I’ve created an appendix of related posts that you can find at the bottom of each post in this series.

The series is about how we’re working to improve security inside AutoCAD, and what this means for application developers. Dieter’s posts on Lynn’s blog help explain some of the background to this work, as do some of my own past posts here.

Perhaps the biggest security change in AutoCAD 2016 is the increased emphasis on program modules being digitally signed. Signing has become “best practice” for software being deployed to customers, and we’re strongly encouraging AutoCAD developers to go down this path. Signing tells customers that modules have been created by a trusted source and haven’t been tampered with since the moment they were signed.

So how do you sign your program modules? The first step is to get hold of a digital certificate, whether by making one (for testing) or buying one from a reliable vendor such as Symantec (VeriSign), DigiCert, GoDaddy, Thawte or GlobalSign. Make sure you get a code signing certificate that supports Microsoft Authenticode. You should expect to pay around $200-$500 per year for such a certificate, depending on where you get it from. This may seem expensive, but signing is becoming increasingly important to companies and it’s a cost you can amortise across your various applications and customers.

Once you have a certificate, you’ll need to create a PFX file for it: this will make it a lot easier to sign standard OS modules such as .NET DLLs, ARXs, CRXs, DBXs and EXEs. To perform this type of signing you use SignTool.exe, which can be run from a standard command prompt or from a Visual Studio post-build event.
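To give an idea, a SignTool invocation from a post-build event might look something like this – the file names, password and timestamp URL below are placeholders, and your certificate vendor will tell you which timestamp server to use:

```bat
rem Sign the module with the certificate in the PFX file,
rem timestamping it so the signature outlives the certificate
signtool sign /f MyCompanyCert.pfx /p MyPassword /t http://timestamp.example.com MyAddin.dll

rem Check the signature afterwards (default Authenticode policy)
signtool verify /pa MyAddin.dll
```

In Visual Studio you'd typically put the first line in the project's post-build event, using `$(TargetPath)` in place of the hardcoded DLL name.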

You’ll also want to import the certificate into the Windows Certificate Store: this will allow you to use it to sign AutoLISP files and also to verify the signature of signed modules on your system.

Signing AutoLISP files is perhaps even easier than signing DLLs, as the app that does it provides a GUI: AcSignApply.exe is found in AutoCAD’s Program Files folder and can be used to sign .LSP files (as well as .FAS, .VLX & .MNL files, drawing files and eTransmit archives).

Here’s the UI for this tool:

AcSignApply about to sign a LSP file

There are some “executable” file types that currently can’t be signed, such as .CUI, .CUIx, .DVB, .JS, .PGP and .SCR. It’s recommended that these files be placed in read-only locations, as they could otherwise become attack vectors for malicious applications.

In tomorrow’s post we’ll take a look at how AutoCAD behaves when loading signed/unsigned modules, as well as what a signed .LSP looks like.

Appendix

Photo credit: Cadenas via photopin (license)

May 25, 2015

Cooling down after the SF VR Hackathon

The 2nd VR Hackathon, which took place in San Francisco over the weekend, was an absolute blast. It was held at Galvanize, a co-working space about a 15-minute walk from our 1 Market Street office. The venue was great: plenty of space and a fair amount of natural light (very important for those of us getting over our jetlag).

There were fewer people at this second event – inevitably, as it happened over the Memorial Day weekend – but there was nonetheless a great energy in the room. The core of our team – which we named “VR Party” – was made up of myself, Lars Schneider and Oleg Dedkow, the latter two of whom flew across from our Potsdam office to participate. A few other people expressed some interest in joining the team, but in the end it was just us – although Jim Quanci lent a hand on the last day with testing and feedback.

The VR Party team, geeking out in VR

Our “hack” – which I talked about previously – was to make VR a collaborative experience: to have someone curate and control the VR session for a number of consumers. Communicating design information is a really important activity for all parts of our industry, and I think VR could well become a great enabling tool.

We ended up with a “presenter” page, which allows you to open and view models via the View & Data API.

The master page

The embedded QR code allows an arbitrary number of people to open up “participant” pages on the devices of their choice (ideally using Google Cardboard to see the page in 3D):

The basic stereo view

All the events you perform – apart from changing the viewpoint, which is something we want controlled locally – get propagated to any of the connected clients via Web Sockets. So if you isolate geometry in the presenter window, all the viewers see the same thing.
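The relay logic itself boils down to a simple publish pattern. Here’s a minimal, dependency-free sketch of the idea – in Python for brevity, since the actual implementation uses Web Sockets in JavaScript, and all names here are invented for the example:

```python
class PresenterSession:
    """Relays presenter events to every connected participant,
    except for camera changes, which stay local to each client."""

    LOCAL_ONLY = {"camera"}  # the viewpoint is controlled per-client

    def __init__(self):
        self.participants = []

    def connect(self, participant):
        self.participants.append(participant)

    def publish(self, event, payload):
        if event in self.LOCAL_ONLY:
            return  # each participant keeps their own viewpoint
        for p in self.participants:
            p.apply(event, payload)


class Participant:
    def __init__(self):
        self.state = {}

    def apply(self, event, payload):
        self.state[event] = payload


session = PresenterSession()
a, b = Participant(), Participant()
session.connect(a)
session.connect(b)

session.publish("isolate", ["roof", "walls"])  # mirrored everywhere
session.publish("camera", {"yaw": 90})         # stays local

print(a.state)  # {'isolate': ['roof', 'walls']}
```

In the real app the `publish` step is a Web Socket broadcast, but the split between mirrored events (isolate, explode, section) and local ones (camera) is exactly this.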

Isolating in the master leads to all clients having the same geometry isolated

The same is true for exploding the model…

And the same with explode – but you control the camera yourself

… and even for sectioning!

Sectioning was a nice surprise, and a great addition

The experience was actually really compelling – perhaps even more than expected, in some ways.

We had a bit of a scare as we entered the last hour or so of the competition: we had foolishly introduced some instability, late in the game, as we attempted to crack the issue of communicating zoom level (which is harder than you might think when people are looking at models from arbitrary directions). Thankfully we pulled it all back together in the closing minutes – thanks to some seriously impressive Git repo manipulation from Lars – enough for the demo to blow the socks off the judges, at least.

Which meant we ended up coming away with the award for the “Best Web-based VR Project”:

The category we won

We were thrilled with the result – not even the award, especially, but we felt we came up with something that was actually really useful. The code is all in GitHub but we need to do a little more work – some clean-up but also to add support for multiple sessions – before sharing a link to the live site.

A big shout out to teammates Lars and Oleg (and honorary team member, Jim), as well as to Damon for organising another great hackathon. Can’t wait for the next one!

May 22, 2015

Viewing 50+ design software formats from a web-page

This is very cool. As Stephen Preston has reported over on the Cloud & Mobile DevBlog, the A360 team has delivered a widget that can be embedded in web-pages to view design files – including DWG files saved from AutoCAD, of course – that are dragged & dropped onto it. It basically allows you to view them as you would in A360, but inside any web-page.

Instructions are available at 360.autodesk.com/viewer/widget, although – as Stephen notes – be sure to call adskViewerWidget.Init() with a capital “I”.

There are two ways to render the widget. You can either render just the drop area…



… or the full widget:



I'm sure you'll agree this is very handy. Give it a try yourself!

May 20, 2015

An interesting visit to the SVVR exhibition hall

Yesterday I went along with two fellow Autodeskers, Lars and Oleg, to the Silicon Valley Virtual Reality conference at the San Jose Convention Center. As we were only attending the second day – we all flew in on Monday – we just took passes for the exhibition hall rather than the full conference.

People were lining up to get into the exhibition hall as it opened at 11am (it was only open for 4 hours, in total, closing at 3pm). We chose not to join the queue ourselves – we’d bumped into Damon Hernandez and were having too much fun chatting about the AEC and VR Hackathons that he organises – and got into the hall at around 11:15am. Which turned out to be a bit of a mistake: one of the things we really wanted to do was get a demo of the next generation Oculus Rift, codenamed Crescent Bay. And the line for that was already huge.

The queue for Crescent Bay

We did much as a number of others did: we joined the queue and took turns standing in line while the others in our party wandered around the exhibit hall and took in the sights.

The focus of the conference was very much on entertainment – that’s the primary frontier for virtual reality, after all – though I’d hoped to see more technologies targeting design visualization. It’s to be expected, I suppose: people are inevitably chasing the biggest opportunity. The good news is that what works for immersive, long-term use – e.g. for gamers – will end up being more than good enough for more casual (albeit professional) use, such as for engineers and architects.

Putting with Gear VR and Sixense

A few things of note…

It was interesting to learn about OSVR – an open source $200 HMD that’s probably comparable to the DK2 – as well as to try the Samsung Gear VR for Galaxy S6, which is a bit better and lighter than the initial edition for the Note 4.

I used the Gear VR for S6 to play a pretty decent golf simulation, helped by a Sixense wireless controller.

One disappointment was not getting to try the updated Sony Morpheus: I had tried it late last year, but it has apparently received a big update since then. Maybe they’ll be at this weekend’s Hackathon, in which case I’ll get to try it there.

Two of the best discoveries of the day actually came from standing in the Crescent Bay line with the same people for close to 3 hours (on and off). I met Jin, the man behind Project Nourished, which is essentially virtual dining. Yes, you read that right.

Jin

This project is wild: you eat gelatinous lumps that have been flavoured using different algae combinations to match the food you’re eating in the virtual environment. Along with recorded sounds of eating the target foodstuff, it’s apparently enough to fool your brain, allowing you to eat things you otherwise couldn’t for health reasons. Crazy!

Project Nourished

The second discovery was also really fortuitous. I was demoing the Cardboard sample using the View & Data API to Jin, but I hadn’t brought along my easily crumpled DODOcase. Someone else in the line – a developer named Derek Chen, who had written a really cool VR game – pulled out a set of Go4D C1-Glass goggles by Goggle Tech. These snapped easily onto my phone and gave great visual quality – super-impressive. Aside from folding up and being very portable, they make it really easy to access the screen to provide touch input and also keep the microphone freely accessible for speech input. All in all a great set of VR goggles: luckily they were also exhibiting at SVVR, so Jin, Oleg and I each went and picked some up.




So after waiting 3 hours, was the Crescent Bay demo impressive? It was – it blew my socks off for the 90 or so seconds I had to try it. The visual quality was simply stunning. But it was tethered, and the recently announced Oculus Rift hardware specification is pretty steep… I still feel that for the design visualization space it’s going to be mobile (and even web-based) VR that’s ultimately going to be most interesting.

May 18, 2015

VRing in SF

I’m on the train to Zurich airport, where I’ll hop on the direct flight to San Francisco. This evening I’m staying in San Jose, as tomorrow I’ll be visiting the SVVR 2015 expo hall to do some research on the latest virtual reality technologies in advance of the coming weekend’s VR Hackathon (following on from the one in October).


VR Hackathon

This Hackathon is set to be really fun: while last time I ended up mostly talking with people about Autodesk and the View & Data API, demoing the Google Cardboard prototype I put together – which was also fun, especially when you’re hanging out with Jim Quanci – this time we’re planning to form an Autodesk team to participate in the event. A couple of colleagues from the InfraWorks team in Potsdam are flying in today, too, and we’ll hopefully be joined by a few locals from the 1 Market office who are interested in web-based VR.

The thinking, right now, is to work on some kind of guided VR experience, where you have one person immersed in a model with another person choosing certain options – whether isolating/exploding geometry or switching to specific locations in a site. This could end up being a really interesting application of VR for our industry: as I’ve lamented, in the past, control is often challenging within VR. Sensing technology – such as Kinect or Leap Motion – is certainly one answer, just as voice recognition technology might be another, but having someone else guide the experience will (I believe) lead to a much more social and – dare I say – intimate experience. Something that could really help in a situation where human relationships are important.

Anyway, we’ll see how it goes – it’ll certainly be fun to explore the possibilities. I expect the weekend to be fairly intense: interestingly, Monday is a holiday both in the US (Memorial Day) and in Switzerland (Pentecost), so I’ll get to decompress for a day before heading back home on Tuesday evening.

May 15, 2015

Creating a rectangular jigsaw puzzle with specific dimensions inside AutoCAD using .NET

Following on from the last post, where we saw an outline for this series of posts on AutoCAD I/O, today’s post adds a command to our jigsaw application that creates the geometry for a jigsaw puzzle of a specified size and with a specified number of pieces.

As jigsaw puzzle pieces are essentially square, it actually took me some time to get my head around the mathematics needed to calculate the number of pieces we need in each of the X and Y directions to make a puzzle of a certain size. And it’s (with hindsight) obviously not possible to make square pieces work with an arbitrary total, which is why the application asks for an approximate number of pieces and then does its best to meet it.

The approach should be fairly obvious from the code… here’s the new JIGG command in action:


Grid-based jigsaw

At the command-line we see that the puzzle is actually smaller than the proposed 13K pieces, because we can’t create a rectangular grid of exactly that size.

Puzzle will be 147 x 88 (12936 in total).
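The piece-count arithmetic can be sketched in a few lines – here in Python, for a quick sanity check; the 1000 x 600 dimensions and 13,000-piece request are example values that reproduce the numbers above:

```python
import math

def grid_size(width, height, pieces):
    """Given overall puzzle dimensions and a requested (approximate)
    piece count, work out how many pieces to use in X and Y so the
    individual pieces stay roughly square."""
    aspect = height / width
    pieces_y = math.floor(math.sqrt(aspect * pieces))
    pieces_x = math.floor(pieces / pieces_y)
    return pieces_x, pieces_y

# A 1000 x 600 puzzle with ~13,000 requested pieces
print(grid_size(1000, 600, 13000))  # (147, 88) -> 12936 pieces
```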

Here's the C# code:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;
using System;

namespace JigsawGenerator
{
  public class Commands
  {
    // The WIGL command asks the user to enter this value (which
    // influences the extent of the "wiggle"). For the JIG, JIGG
    // and JIGL commands we just use this hardcoded value.
    // We could certainly ask the user to enter it or get it
    // from a system variable, of course

    const double wigFac = 0.8;

    // We'll store a central random number generator,
    // which means we'll get more random results

    private Random _rnd = null;

    // Constructor

    public Commands()
    {
      _rnd = new Random();
    }

    [CommandMethod("JIG")]
    public void JigEntity()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (null == doc)
        return;
      var db = doc.Database;
      var ed = doc.Editor;

      // Select our entity to create a tab for

      var peo = new PromptEntityOptions("\nSelect entity to jig");
      peo.SetRejectMessage("\nEntity must be a curve.");
      peo.AddAllowedClass(typeof(Curve), false);

      var per = ed.GetEntity(peo);
      if (per.Status != PromptStatus.OK)
        return;

      // We'll ask the user to select intersecting/delimiting
      // entities: if they choose none we use the whole length

      ed.WriteMessage(
        "\nSelect intersecting entities. " +
        "Hit enter to use whole entity."
      );

      var pso = new PromptSelectionOptions();
      var psr = ed.GetSelection(pso);
      if (
        psr.Status != PromptStatus.OK &&
        psr.Status != PromptStatus.Error // No selection
      )
        return;

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        // Open our main curve

        var cur =
          tr.GetObject(per.ObjectId, OpenMode.ForRead) as Curve;

        double start = 0, end = 0;
        bool bounded = false;

        if (cur != null)
        {
          // We'll collect the intersections, if we have
          // delimiting entities selected

          var pts = new Point3dCollection();

          if (psr.Value != null)
          {
            // Loop through and collect the intersections

            foreach (var id in psr.Value.GetObjectIds())
            {
              var ent = (Entity)tr.GetObject(id, OpenMode.ForRead);

              cur.IntersectWith(
                ent,
                Intersect.OnBothOperands,
                pts,
                IntPtr.Zero,
                IntPtr.Zero
              );
            }
          }

          ed.WriteMessage(
            "\nFound {0} intersection points.", pts.Count
          );

          // If we have no intersections, use the start and end
          // points

          if (pts.Count == 0)
          {
            start = cur.StartParam;
            end = cur.EndParam;
            pts.Add(cur.StartPoint);
            pts.Add(cur.EndPoint);
            bounded = true;
          }
          else if (pts.Count == 2)
          {
            start = cur.GetParameterAtPoint(pts[0]);
            end = cur.GetParameterAtPoint(pts[1]);
            bounded = true;
          }

          // If we have a bounded length, create our tab in a random
          // direction

          if (bounded)
          {
            var left = _rnd.NextDouble() >= 0.5;

            var sp = CreateTab(cur, start, end, pts, left);

            var btr =
              (BlockTableRecord)tr.GetObject(
                SymbolUtilityServices.GetBlockModelSpaceId(db),
                OpenMode.ForWrite
              );
            btr.AppendEntity(sp);
            tr.AddNewlyCreatedDBObject(sp, true);
          }
        }

        tr.Commit();
      }
    }

    [CommandMethod("JIGL")]
    public void JigLines()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (null == doc)
        return;
      var db = doc.Database;
      var ed = doc.Editor;

      // Here we're going to get a selection set, but only care
      // about lines

      var psr = ed.GetSelection();
      if (psr.Status != PromptStatus.OK)
        return;

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        var btr =
          (BlockTableRecord)tr.GetObject(
            SymbolUtilityServices.GetBlockModelSpaceId(db),
            OpenMode.ForWrite
          );

        // We'll be generating random numbers to decide direction
        // for each tab

        foreach (var id in psr.Value.GetObjectIds())
        {
          // We only care about lines

          var ln = tr.GetObject(id, OpenMode.ForRead) as Line;
          if (ln != null)
          {
            // Get the start and end points in a collection

            var pts =
              new Point3dCollection(
                new Point3d[] {
                  ln.StartPoint,
                  ln.EndPoint
                }
              );

            // Decide the direction (randomly) then create the tab

            var left = _rnd.NextDouble() >= 0.5;
            var sp =
              CreateTab(ln, ln.StartParam, ln.EndParam, pts, left);

            btr.AppendEntity(sp);
            tr.AddNewlyCreatedDBObject(sp, true);
          }
        }
        tr.Commit();
      }
    }

    [CommandMethod("JIGG")]
    public void JigGrid()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (null == doc)
        return;
      var db = doc.Database;
      var ed = doc.Editor;

      // Get overall dimensions of the puzzle

      var pdo = new PromptDoubleOptions("\nEnter puzzle width");
      pdo.AllowNegative = false;
      pdo.AllowNone = false;
      pdo.AllowZero = false;

      var pdr = ed.GetDouble(pdo);
      if (pdr.Status != PromptStatus.OK)
        return;

      var width = pdr.Value;

      pdo.Message = "\nEnter puzzle height";
      pdr = ed.GetDouble(pdo);
      if (pdr.Status != PromptStatus.OK)
        return;

      var height = pdr.Value;

      // Get the (approximate) number of pieces

      var pio =
        new PromptIntegerOptions("\nApproximate number of pieces");
      pio.AllowNegative = false;
      pio.AllowNone = false;
      pio.AllowZero = false;

      var pir = ed.GetInteger(pio);
      if (pir.Status != PromptStatus.OK)
        return;

      var pieces = pir.Value;

      var aspect = height / width;
      var piecesY = Math.Floor(Math.Sqrt(aspect * pieces));
      var piecesX = Math.Floor(pieces / piecesY);

      ed.WriteMessage(
        "\nPuzzle will be {0} x {1} ({2} in total).",
        piecesX, piecesY, piecesX * piecesY
      );

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        var btr =
          (BlockTableRecord)tr.GetObject(
            SymbolUtilityServices.GetBlockModelSpaceId(db),
            OpenMode.ForWrite
          );

        var incX = width / piecesX;
        var incY = height / piecesY;
        var tol = Tolerance.Global.EqualPoint;

        for (double x = 0; x < width - tol; x += incX)
        {
          for (double y = 0; y < height - tol; y += incY)
          {
            var nextX = x + incX;
            var nextY = y + incY;

            // At each point in the grid - apart from when along
            // the axes - we're going to create two lines, one
            // in the X direction and one in the Y (along the axes
            // we'll usually be creating one or the other, unless
            // at the origin :-)

            if (y > 0)
            {
              var sp =
                CreateTabFromPoints(
                  new Point3d(x, y, 0),
                  new Point3d(nextX, y, 0)
                );
              btr.AppendEntity(sp);
              tr.AddNewlyCreatedDBObject(sp, true);
            }

            if (x > 0)
            {
              var sp =
                CreateTabFromPoints(
                  new Point3d(x, y, 0),
                  new Point3d(x, nextY, 0)
                );
              btr.AppendEntity(sp);
              tr.AddNewlyCreatedDBObject(sp, true);
            }
          }
        }

        // Create the puzzle border as a closed polyline

        var pl = new Polyline(4);
        pl.AddVertexAt(0, Point2d.Origin, 0, 0, 0);
        pl.AddVertexAt(1, new Point2d(width, 0), 0, 0, 0);
        pl.AddVertexAt(2, new Point2d(width, height), 0, 0, 0);
        pl.AddVertexAt(3, new Point2d(0, height), 0, 0, 0);
        pl.Closed = true;

        btr.AppendEntity(pl);
        tr.AddNewlyCreatedDBObject(pl, true);

        tr.Commit();
      }
    }

    private Curve CreateTabFromPoints(Point3d start, Point3d end)
    {
      using (var ln = new Line(start, end))
      {
        // Get the start and end points in a collection

        var pts =
          new Point3dCollection(new Point3d[] { start, end });

        // Decide the direction (randomly) then create the tab

        var left = _rnd.NextDouble() >= 0.5;

        return CreateTab(ln, ln.StartParam, ln.EndParam, pts, left);
      }
    }

    [CommandMethod("WIGL")]
    public void AdjustTabs()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (null == doc)
        return;
      var db = doc.Database;
      var ed = doc.Editor;

      // Here we're going to get a selection set, but only care
      // about splines

      var pso = new PromptSelectionOptions();
      var psr = ed.GetSelection(pso);
      if (psr.Status != PromptStatus.OK)
        return;

      var pdo = new PromptDoubleOptions("\nEnter wiggle factor");
      pdo.DefaultValue = 0.8;
      pdo.UseDefaultValue = true;
      pdo.AllowNegative = false;
      pdo.AllowZero = false;

      var pdr = ed.GetDouble(pdo);
      if (pdr.Status != PromptStatus.OK)
        return;

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        foreach (var id in psr.Value.GetObjectIds())
        {
          // We only care about splines

          var sp = tr.GetObject(id, OpenMode.ForRead) as Spline;
          if (sp != null && sp.NumFitPoints == 6)
          {
            // Collect the fit points

            var pts = sp.FitData.GetFitPoints();

            // Adjust them

            AddWiggle(pts, pdr.Value);

            // Set the top points back on the spline
            // (we know these are the ones that have changed)

            sp.UpgradeOpen();

            sp.SetFitPointAt(2, pts[2]);
            sp.SetFitPointAt(3, pts[3]);
          }
        }
        tr.Commit();
      }
    }

    private Curve CreateTab(
      Curve cur, double start, double end, Point3dCollection pts,
      bool left = true
    )
    {
      // Calculate the length of this curve (or section)

      var len =
        Math.Abs(
          cur.GetDistanceAtParameter(end) -
          cur.GetDistanceAtParameter(start)
        );

      // We're calculating a random delta to adjust the location
      // of the tab along the length

      double delta = 0.01 * len * (_rnd.NextDouble() - 0.5);

      // We're going to offset to the side of the core curve for
      // the tab points. This is currently a fixed tab size
      // (could also make this proportional to the curve)

      double off = 0.2 * len; // was 0.5
      double fac = 0.5 * (len - 0.5 * off) / len;
      if (left) off = -off;

      // Get the next parameter along the length of the curve
      // and add the point associated with it into our fit points

      var nxtParam = start + (end - start) * (fac + delta);
      var nxt = cur.GetPointAtParameter(nxtParam);
      pts.Insert(1, nxt);

      // Get the direction vector of the curve

      var vec = pts[1] - pts[0];

      // Rotate it by 90 degrees in the direction we chose,
      // then normalise it and use it to calculate the location
      // of the next point

      vec = vec.RotateBy(Math.PI * 0.5, Vector3d.ZAxis);
      vec = off * vec / vec.Length;
      pts.Insert(2, nxt + vec);

      // Now we calculate the mirror points to complete the
      // spline's definition

      nxtParam = end - (end - start) * (fac - delta);
      nxt = cur.GetPointAtParameter(nxtParam);
      pts.Insert(3, nxt + vec);
      pts.Insert(4, nxt);

      AddWiggle(pts, wigFac);

      // Finally we create our spline

      return new Spline(pts, 1, 0);
    }

    private void AddWiggle(Point3dCollection pts, double fac)
    {
      const double rebase = 0.3;

      // Works on sets of six points only
      //
      //             2--------3
      //             |        |
      //             |        |
      // 0-----------1        4-----------5

      if (pts.Count != 6)
        return;

      // Our spline's direction, tab width and perpendicular vector

      var dir = pts[5] - pts[0];
      dir = dir / dir.Length;
      var tab = (pts[4] - pts[1]).Length;
      var cross = dir.RotateBy(Math.PI * 0.5, Vector3d.ZAxis);
      cross = cross / cross.Length;

      // Adjust the "top left" and "top right" points outwards,
      // scaling by the tab width and fac, with the random factor
      // (0-1) shifted back towards zero by rebase

      pts[2] =
        pts[2]
        - (dir * tab * fac * (_rnd.NextDouble() - rebase))
        + (cross * tab * fac * (_rnd.NextDouble() - rebase));
      pts[3] =
        pts[3]
        + (dir * tab * fac * (_rnd.NextDouble() - rebase))
        + (cross * tab * fac * (_rnd.NextDouble() - rebase));
    }
  }
}

At some point this application will need to take some additional input – we’re going to want to create an engraving layer, displaying a simplified version of a picture or photo – but that’s for a future post.

May 13, 2015

AutoCAD I/O and custom applications

AutoCAD I/O

The title of this post is probably a bit misleading: I’m not actually going to show how this works today, but I do intend to plot a path for addressing this topic over the coming weeks.

I was spurred on by a tweet I received a couple of hours ago:



The short answer to this is “yes, it’s absolutely possible!”. But readers of this blog are clearly interested in details, so that’s where I want to get to.

In the last post I mentioned the two class proposals I’ve submitted for AU2015: one deals with VR and the other with AutoCAD I/O. The second class will focus, in particular, on how to leverage your own .NET modules in an AutoCAD I/O application. The class will be fuelled, as usual, by a series of posts I publish on this blog.

This is what I have in mind, specifically: you may remember the jigsaw generation application we saw a few weeks ago. I want to extend this app, implementing a command that will generate – from scratch, no existing geometry up our sleeves – a jigsaw of a given physical size and of an approximate number of pieces. That’s code I’m going to show in the next post.

I then want to build a web-site around this core AutoCAD app. This will involve making sure the module works in the Core Console – which is the component driving AutoCAD I/O – and then that we can create an app package for AutoCAD I/O to load the module, allowing us to execute scripts to generate jigsaws in the cloud.
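For reference, driving a module via the Core Console locally looks something like this from a command prompt – the paths, drawing and script names below are hypothetical, but the same script-based approach is what AutoCAD I/O uses in the cloud:

```bat
rem Run a script against a drawing using the Core Console
"C:\Program Files\Autodesk\AutoCAD 2016\accoreconsole.exe" /i blank.dwg /s jigsaw.scr
```

The script would NETLOAD the compiled module and then invoke our custom command, feeding it the width, height and piece-count values.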

The HTML page is going to do a bit more than just allowing the user to enter the dimensions and number of pieces, though: I’d like the user to be able to upload a custom image (or photo) via the site, and we’ll then create a drawing that can be used to laser-cut the puzzle but also to engrave the image onto it. Laser-engraving a full colour image is (as far as I’m aware) impossible, so this is going to be a little tricky. The idea is to use some image processing/computer vision techniques to create a monochrome pattern to be engraved: we’ll do some edge detection – either in the browser or in the cloud, we’ll see – and then propose that to the user in some way before they click on “generate my puzzle”. And “here are my credit card details”, of course. ;-)
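To make the edge-detection idea concrete, here’s a minimal, dependency-free Python sketch of a Sobel filter – just an illustration of the sort of processing involved, not necessarily the implementation we’ll end up using, and the image data is made up for the example:

```python
def sobel_edges(img, threshold=1.0):
    """Mark pixels whose Sobel gradient magnitude exceeds a threshold.
    `img` is a 2D list of grayscale values; returns a 2D list of 0/1."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 > threshold else 0
    return out

# A dark square on a light background: edges appear at the boundary
img = [[0] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 9
edges = sobel_edges(img, threshold=10)
```

The resulting monochrome mask is the kind of thing we could propose to the user as the engraving pattern.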

The eventual aim would be to automate the preparation and dispatch of the physical puzzle, but that’s a bit out of the scope for an AU session, where we’ll probably just take a look at the resultant DWG file.

But the point is this: we could be using a .NET application inside AutoCAD I/O to do much of the heavy-lifting for this web-site, and the user will never even know. They won’t see a DWG file and they may not even see much by way of graphics (although I expect we’ll generate an image preview to show before the purchase is finalised).

This is exactly the kind of scenario we expect to see AutoCAD I/O used for, over time. And the great opportunity, here, is that you can make use of your existing .NET application code to take advantage of it.

I think this is pretty exciting… we’ll see how it all takes shape over the coming weeks!

May 11, 2015

Autodesk University 2015 class proposals

AU2015

A reminder that proposals are open for AU2015 until May 26th. I’ve just submitted two, myself. Of the three topics I had in mind – relating to VR, AutoCAD I/O and TypeScript – I decided to submit proposals on the first two: I’ll do my best to use TypeScript for one or both of the other two (assuming they get accepted) which will at least give people some exposure to how the technology works. And give some good fodder for blog posts, of course.

Here are the abstracts I submitted:

Virtual Reality viewing of 3D models using Autodesk's View and Data API

2015 is the year in which Virtual Reality is finally ready for the masses: Google Cardboard was perfect for democratizing VR, while products like Samsung Gear VR have helped increase the quality for scenarios that require it. Importantly, both technologies are powered by mobile phones.

This session looks at the steps to create web-based VR for design visualization using Autodesk’s View and Data API, and how this core implementation can be extended via native Android SDKs for both Google Cardboard and Samsung Gear VR.

We will also spend time looking at Augmented Reality technologies such as Magic Leap and Microsoft HoloLens to understand the implications for design and engineering.

This first class is largely already written – I presented an early draft a few weeks ago in Singapore – apart from the piece in the last paragraph related to AR. Hopefully I’ll be able to demo something related to HoloLens in December, but it’s a bit too soon to say, at this stage.

The second class is going to take some more work, but I’m pretty sure it’ll come together nicely.

Integrating .NET code with AutoCAD I/O to add design intelligence to your web-site

Over the last decade or so, software developers have amassed a significant amount of intellectual property harnessing AutoCAD's .NET API. AutoCAD I/O allows standard AutoCAD commands as well as those implemented in .NET to be executed in the cloud, generating results that can be integrated into your own B2B or B2C web-site.

This class takes a concrete example of a .NET application creating custom jigsaw puzzles inside AutoCAD. During the class we show how to move the core implementation to AutoCAD I/O via the Core Console, and then make use of this to power a new B2C web-site. Potential customers will be able to specify custom designs for jigsaw puzzles and visualize the results before finalizing their orders.

That’s what I’m hoping to present at this year’s AU. If you’re interested in submitting your own class but would like some free* advice, feel free to drop me a quick email. I’d be happy to review your proposal and provide feedback. Although please do so soon, as I’m travelling to the US for the 10 days prior to the deadline and may not have much time during that period to read and respond to email.

(* An optional fee may get applied: the going rate is one beer somewhere in the vicinity of The Venetian per proposal reviewed. This fee is only applicable for proposals that get accepted, of course. ;-)

May 07, 2015

Charity begins… in the office?

In the past I’ve mentioned the Autodesk Foundation and to some extent the focus Autodesk has on benefiting local communities. Today I was very happy to participate in a charitable teambuilding activity in the office: a group of volunteers spent a couple of hours building prosthetic hands for people in developing countries – very often children who have lost their hands due to landmines.




The instructions were straightforward to follow: there were a few tricky parts, but we managed to work through them as a team.

Checking the pieces · Assembling the hand · Mostly done · Adding the straps

The resulting hands will be shipped back to Odyssey Teams in the US, from where they’ll be distributed around the world. Hopefully we’ll hear back, at some point, from the person who has benefitted from it – part of the process is to sign a card and provide contact details for the eventual recipient.

The finished product

Thanks to my teammates, Anna, Simona and John, and to our Autodesk Foundation representative, Claudio Ombrella, for organising this event. (If you’re interested in participating in your own company, I thoroughly recommend it!)

Onto a (slightly) related topic, in that it regards another worthy cause…

Autodesk is sponsoring an event in San Francisco next Wednesday – on May 13, 2015 – with the entire proceeds going to support the community-based, non-profit Roxie Theater which – at 106 years old – is the city’s longest-running cinema. The other sponsors are Goo Technologies, Leap Motion and the Khronos Group.

Roxie Theater in SF

The event is the 3D Web Fest, a celebration of web-based 3D content. It’ll be a bit like a film festival but focusing on the best of the 3D Web. Here’s some information about the event:

This event experience will showcase websites that are the best mixture of music, art and technology, and exhibit what's possible with the advent of the 3D Web in a film festival format and club atmosphere. The Web has transitioned from mostly text to an increasingly rich 2D image environment, and now the shift to 3D has arrived, enabled by new Web standards and open source technologies. The 3D Web Fest brings together the best of the 3D Web, presented live: amazing, delightful, surprising. By immersing yourself in the world of the 3D Web you'll experience what's possible for innovative commercial, non-commercial, experimental or game-based projects.

At the time of writing, there are still tickets available. If you’re in the Bay Area next week, check it out!




photo credit: Post @sfembassy scheming with @conniehwong via photopin (license)

May 06, 2015

Jigging an AutoCAD block with attributes using .NET (redux)

Our old friend Roland Feletic emailed me last week. He’d been having some trouble with this previous post when jigging blocks with multiline attributes. Roland had also identified some code in this post on another blog which worked properly for him.

I spent some time looking into what was wrong with the original post. It certainly didn’t deal with the appropriate placement of multiline text, and didn’t take proper care of annotation scaling and UCS. Time for a do-over. :-)
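For background, the UCS part of the fix is essentially a coordinate round-trip: the jig acquires points in WCS, transforms them by the inverse of the UCS matrix to store them in UCS terms, and applies the UCS matrix again when positioning the block. Here's a tiny, hypothetical Python sketch of that round-trip using 4x4 homogeneous matrices – nothing AutoCAD-specific, just the underlying matrix arithmetic:

```python
def mat_vec(m, v):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = v
    p = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(3))

def translation(dx, dy, dz):
    """A 4x4 translation matrix (a simple stand-in for a UCS)."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

# Suppose the UCS is a simple translation of (10, 5, 0) from WCS
ucs = translation(10, 5, 0)
ucs_inv = translation(-10, -5, 0)

wcs_pt = (12.0, 8.0, 0.0)          # point acquired by the jig (in WCS)
ucs_pt = mat_vec(ucs_inv, wcs_pt)  # stored in UCS terms, as in Sampler()
back = mat_vec(ucs, ucs_pt)        # re-applied when positioning the block
# back == wcs_pt: the round-trip restores the original point
```

A real UCS matrix would of course encode rotation as well as translation, but the round-trip principle – inverse on the way in, forward on the way out – is exactly the same.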

The following C# code is a combination of the code from the previous post and the approach spiderinnet1 took in their own implementation. I didn’t adopt everything they’d done on their side, however: this jig is about simple placement – I didn’t extend it to worry about rotation and scaling – and I chose not to implement certain performance optimizations – their code keeps attribute references and their definitions open inside a hash table during the jig’s operation, while I preferred to incur the modest overhead of opening them from ObjectIds, when needed.

So there are still a few differences between the two implementations – be sure to head over and check out spiderinnet1’s post if you’re interested in their approach.

Here’s the C# code:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;
using System.Collections.Generic;

namespace BlockJigApplication
{
  class BlockJig : EntityJig
  {
    // Member variables

    private Matrix3d _ucs;
    private Point3d _pos;
    private Dictionary<ObjectId, ObjectId> _atts;
    private Transaction _tr;

    // Constructor

    public BlockJig(
      Matrix3d ucs,
      Transaction tr,
      BlockReference br,
      Dictionary<ObjectId, ObjectId> atts
    ) : base(br)
    {
      _ucs = ucs;
      _pos = br.Position;
      _atts = atts;
      _tr = tr;
    }

    protected override bool Update()
    {
      var br = (BlockReference)Entity;

      // Transform to the current UCS

      br.Position = _pos.TransformBy(_ucs);

      if (br.AttributeCollection.Count > 0)
      {
        foreach (ObjectId id in br.AttributeCollection)
        {
          var obj = _tr.GetObject(id, OpenMode.ForRead);
          var ar = obj as AttributeReference;

          if (ar != null)
          {
            ar.UpgradeOpen();

            // Open the associated attribute definition

            var defId = _atts[ar.ObjectId];
            var obj2 = _tr.GetObject(defId, OpenMode.ForRead);
            var ad = (AttributeDefinition)obj2;

            // Use it to set positional information on the
            // reference

            ar.SetAttributeFromBlock(ad, br.BlockTransform);
            ar.AdjustAlignment(br.Database);
          }
        }
      }
      return true;
    }

    protected override SamplerStatus Sampler(JigPrompts prompts)
    {
      var opts =
        new JigPromptPointOptions("\nSelect insertion point:");
      opts.BasePoint = Point3d.Origin;
      opts.UserInputControls =
        UserInputControls.NoZeroResponseAccepted;

      var ppr = prompts.AcquirePoint(opts);
      if (ppr.Status != PromptStatus.OK)
        return SamplerStatus.Cancel;

      var ucsPt = ppr.Value.TransformBy(_ucs.Inverse());
      if (_pos == ucsPt)
        return SamplerStatus.NoChange;

      _pos = ucsPt;

      return SamplerStatus.OK;
    }

    public PromptStatus Run()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (doc == null)
        return PromptStatus.Error;

      return doc.Editor.Drag(this).Status;
    }
  }

  public class Commands
  {
    const string annoScalesDict = "ACDB_ANNOTATIONSCALES";

    [CommandMethod("BJ")]
    static public void BlockJigCmd()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      var db = doc.Database;
      var ed = doc.Editor;

      var pso = new PromptStringOptions("\nEnter block name: ");
      var pr = ed.GetString(pso);

      if (pr.Status != PromptStatus.OK)
        return;

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        var bt =
          (BlockTable)tr.GetObject(
            db.BlockTableId,
            OpenMode.ForRead
          );

        if (!bt.Has(pr.StringResult))
        {
          ed.WriteMessage(
            "\nBlock \"" + pr.StringResult + "\" not found.");
          return;
        }

        var ms =
          (BlockTableRecord)tr.GetObject(
            db.CurrentSpaceId,
            OpenMode.ForWrite
          );

        var btr =
          (BlockTableRecord)tr.GetObject(
            bt[pr.StringResult],
            OpenMode.ForRead
          );

        // The block needs to be inserted into the current space
        // before we're able to append attributes to it

        var br = new BlockReference(new Point3d(), btr.ObjectId);
        br.TransformBy(ed.CurrentUserCoordinateSystem);

        ms.AppendEntity(br);
        tr.AddNewlyCreatedDBObject(br, true);

        if (btr.Annotative == AnnotativeStates.True)
        {
          var ocm = db.ObjectContextManager;
          var occ = ocm.GetContextCollection(annoScalesDict);
          br.AddContext(occ.CurrentContext);
        }
        else
        {
          br.ScaleFactors = new Scale3d(br.UnitFactor);
        }

        // Instantiate our map between attribute references
        // and their definitions

        var atts = new Dictionary<ObjectId, ObjectId>();

        if (btr.HasAttributeDefinitions)
        {
          foreach (ObjectId id in btr)
          {
            var obj = tr.GetObject(id, OpenMode.ForRead);
            var ad = obj as AttributeDefinition;

            if (ad != null && !ad.Constant)
            {
              var ar = new AttributeReference();

              // Set the initial positional information

              ar.SetAttributeFromBlock(ad, br.BlockTransform);
              ar.TextString = ad.TextString;

              // Add the attribute to the block reference
              // and transaction

              var arId = br.AttributeCollection.AppendAttribute(ar);
              tr.AddNewlyCreatedDBObject(ar, true);

              // Initialize our dictionary with the ObjectIds of
              // the attribute reference & definition

              atts.Add(arId, ad.ObjectId);
            }
          }
        }

        // Run the jig

        var jig =
          new BlockJig(
            ed.CurrentUserCoordinateSystem, tr, br, atts
          );

        if (jig.Run() != PromptStatus.OK)
          return;

        // Commit changes if the user accepted, otherwise discard

        tr.Commit();
      }
    }
  }
}

As far as I can tell, this updated version works well with various kinds of blocks with attributes, as well as taking care of UCS and annotation scaling considerations.

A big thank you to Roland for suggesting the topic and providing his own code for comparison, to Philippe Leefsma for the original code and to spiderinnet1 for their improved implementation.

Update:

As requested, here’s a quick demo of the updated BJ command in action, using an example file kindly provided by Roland:


Block jig
