As mentioned in this recent post, I’ve been working on my AutoCAD I/O-driven website on and off for the last few weeks. Lately I’ve had to think beyond certain assumptions I’d made about its architecture, and I thought it worth sharing those thoughts here.
The intention of the site is that you upload an image and then see edge detection performed on it, generating an engraving layer for a custom jigsaw puzzle. AutoCAD I/O then gets used to generate a drawing that can drive a laser cutter, creating your 100% unique jigsaw puzzle. Basically making the world a better place through the power of jigsaws. ;-)
I had originally seen a lot of the work as being done in the browser: we’re already doing the edge detection there, for instance, so why not just send all the engraving information across from the browser to the AutoCAD I/O service as a JSON payload for it to use?
The problem with this approach was that the engraving data I needed to transfer to my Node.js service – and from there to AutoCAD I/O – was way too much to encode as URL parameters. Presumably there are other ways of encoding and passing this data from the browser, but it certainly gave me reason to rethink my initial approach.
Back to the drawing board, then. My next design worked on the basis that – rather than relying on the heavy lifting being done in the browser – we could upload the selected image (or a scaled-down version of it) and run our edge detection code in a custom web-service. We launch the upload in the background as soon as the image is selected, so the user should see no lag on their side. When the user has selected the various options in the browser, the Node.js service gets called again – this time running our edge detection algorithm server-side – which in turn calls into AutoCAD I/O for the geometry generation.
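Here’s a minimal sketch of that server-side flow, assuming an Express app; detectEdges() and submitWorkItem() are hypothetical stand-ins for the real edge detection and AutoCAD I/O calls:

```javascript
var express = require('express');
var app = express();

var uploads = {}; // uploaded images, keyed by a client-provided id

// Step 1: the browser uploads the (scaled-down) image in the background
app.post('/upload', function (req, res) {
  var chunks = [];
  req.on('data', function (chunk) { chunks.push(chunk); });
  req.on('end', function () {
    uploads[req.query.id] = Buffer.concat(chunks);
    res.sendStatus(200);
  });
});

// Step 2: once the user has chosen their options, run edge detection
// server-side and hand the result to AutoCAD I/O
app.post('/generate', function (req, res) {
  var image = uploads[req.query.id];
  var engraving = detectEdges(image, req.query.threshold);
  submitWorkItem(engraving, function (err, drawingUrl) {
    if (err) return res.status(500).send(err.message);
    res.send({ drawing: drawingUrl });
  });
});

app.listen(process.env.PORT || 3000);
```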
That’s a major part of the value proposition of Node.js: you can run the same code on the server as in the browser. It did mean I had to factor away anything too browser-dependent – such as direct DOM modifications – and make use of node-canvas (which doesn’t behave exactly like the browser-based canvas… more on this later). Getting node-canvas – which depends on a native component called Cairo – working on Heroku, my chosen hosting provider, took quite some effort, but that’s a whole other story.
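As an illustration of what the shared code needs on the server, here’s roughly how pixel data can be read via node-canvas (using its 1.x-era API), mirroring the browser’s getImageData():

```javascript
var Canvas = require('canvas');

function getPixels(imageBuffer) {
  var img = new Canvas.Image();
  img.src = imageBuffer; // decode the uploaded image

  var canvas = new Canvas(img.width, img.height);
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);

  // RGBA bytes, just as in the browser… though – as noted below –
  // the decoded values can differ very slightly between the two
  return ctx.getImageData(0, 0, img.width, img.height).data;
}
```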
This system architecture mostly works very well. You can check out the code on GitHub or play with the live site.
There are some caveats, though. The issue of transferring the engraving data to AutoCAD I/O is still there: right now there’s a limit of around 30K characters that can be passed as parameters with a WorkItem request… which means I can typically encode an engraving of around (say) 200 x 300 pixels with modest density. This may or may not be enough, over time. I did manage to get a ~4x increase in the data I could transfer by encoding pixel data such as [{"X":0,"Y":1},{"X":0,"Y":2},{"X":0,"Y":3},…,{"X":1,…}] as {"0":"1,2,3,…","1":"…"}, but that’s just pushing the problem further out.
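To illustrate, here’s a sketch of that denser encoding – grouping the Y values of the engraved pixels under their X coordinate, so each X appears once rather than once per pixel:

```javascript
function compressEngraving(pixels) {
  var byColumn = {};
  pixels.forEach(function (p) {
    // append this Y to the comma-separated list for its X column
    byColumn[p.X] = (p.X in byColumn) ? byColumn[p.X] + ',' + p.Y : String(p.Y);
  });
  return byColumn;
}

// compressEngraving([{X:0,Y:1},{X:0,Y:2},{X:0,Y:3},{X:1,Y:5}])
// -> { '0': '1,2,3', '1': '5' }
```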
For now, when the code finds the payload is over 30K, it iteratively reduces the size of the engraving until the WorkItem request is small enough to succeed. An alternative would be to post the engraving data somewhere AutoCAD I/O can access it, rather than including it in the request: I expect this will form the basis of my next iteration on the design.
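Something along these lines, where scaleEngraving() is a hypothetical helper that resamples the pixel grid to a smaller size:

```javascript
var MAX_PAYLOAD = 30000; // ~30K character limit on WorkItem parameters

function fitEngraving(engraving) {
  var payload = JSON.stringify(engraving);
  while (payload.length > MAX_PAYLOAD) {
    engraving = scaleEngraving(engraving, 0.9); // shrink by 10% each pass
    payload = JSON.stringify(engraving);
  }
  return payload;
}
```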
A related – and quickly discarded – alternative architecture was to have the edge detection code run inside AutoCAD I/O itself: we would send a URL for the image to be downloaded and let the code run there. This would mean either running JavaScript code directly in AutoCAD I/O or rewriting the code in an I/O-compliant language (i.e. LISP, .NET or C++). Over time the former may become an option – today it is not – but I’m not about to rewrite working JavaScript code in C#. Even having the existing edge detection code work between the browser and the server created subtle issues: I found that the same image loaded via the browser and via node-canvas results in slightly different pixel values… which (I believe) leads to different edges being detected. I’ve attempted to mitigate this by adjusting the edge detection threshold when running on the server, but this is basically a kludge.
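The mitigation amounts to something like this – the adjustment factor shown is purely illustrative, not the value used on the live site:

```javascript
// Detect whether we're running under Node.js rather than in a browser
var isNode = (typeof window === 'undefined');

function edgeThreshold(base) {
  // Make the detector slightly less sensitive on the server, to
  // compensate for node-canvas decoding pixels slightly differently
  return isNode ? base * 1.1 : base;
}
```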
If you’re interested in other aspects of the design of this application – which will form the basis of my AU2015 class on AutoCAD I/O – then please post a comment!