November 25, 2014

The 2015 Autodesk Cloud Accelerator

Autodesk’s Cloud Platforms team is running an interesting “incubator” program to encourage software developers to create applications that leverage Autodesk’s PaaS layer. (This currently includes the View & Data, AutoCAD I/O, Autodesk Fusion 360, Autodesk BIM 360 and Autodesk ReCap 360 web-service APIs.)

San Francisco

I won’t go into the details – you can get them here – but I will call out the important points as I see them.

  • 2 weeks of working on your cloud-integrated app in Autodesk’s San Francisco office
    • March 9-20, 2015
    • Autodesk experts on hand to help
  • Open to all, but there’s limited capacity
    • 14 companies will be chosen
    • You need to submit a proposal by January 10, 2015 to be considered
  • Autodesk will cover accommodation costs for 1-2 people per company
    • Travel is not covered

This is a great opportunity to have Autodesk help you develop a tool that meets a compelling need your customers have for a cloud-integrated application. Not only that, but you’ll have the chance to demo it in front of Autodesk executives at the end of the 2 weeks. That’s really valuable exposure for your work.

photo credit: Frank Schulenburg via photopin cc

November 24, 2014

Stereoscopy

I’ve long been fascinated by stereoscopy, as I suspect is the case for most people lucky enough to have two functioning eyes. There’s something magical about a device that immerses us in a three dimensional scene by hijacking that fundamental input mechanism of ours, binocular vision. I almost always get that “oh wow” feeling: it just never gets old.

View-Master

I also happen to like collecting cool bits of vintage technology, although in an admittedly haphazard and opportunistic way: I have printing blocks, a typewriter, a TI-57 programmable calculator, an Apple Newton, a Palm Pilot and an iPAQ, to name a few objects that are cluttering my office.

Some time ago I decided to start collecting View-Masters – the devices and their reels – but I didn’t get very far. Like 90% of my interests, this particular one stayed in the “interesting but no need to go further, for now” phase. But as VR has finally reached the masses – particularly with Google Cardboard and the upcoming consumer-oriented version of Oculus Rift, although there are other really interesting things happening out there – the whole View-Master itch has started up again.

Model B

Over the last couple of weeks I’ve gone a little nuts on eBay, picking up Model A, B & C View-Masters (I’m most interested in models from the 1940s & ’50s) and lots of classic reels. The first few auctions were a bit brutal: the interesting ones finished in the afternoon, Pacific time, so I had to set my alarm for 2:30am to be able to place bids in the closing seconds. (I’ve since started using an app that can “snipe” eBay auctions without you being online, which seems to work very well and lets me sleep a little more.)

Model Cs with box

What I find especially interesting about old-school stereoscopes like the View-Master is that a number of application areas that were relevant, back in the day, are now re-emerging for VR. For sure View-Masters were used as an inexpensive way to experience the world – without the need for travel – but they were also used to teach doctors anatomy and – during WWII – to help soldiers develop their skills for identifying ships and aircraft as well as for range estimation.

WWII study reel

For sure VR is reaching into similar areas beyond entertainment – both medical and military – but clearly the ability to synthesize 3D and then use stereoscopy to deliver it immersively is taking things much further, this time around. I expect it to be really compelling for the design industry, too, as do others here at Autodesk. For instance, check out Jim Quanci’s testimonial on this “heading for stretch goals” Kickstarter project to really help DIY VR go to the next level.

DIYVR

Anyway, I’ll be picking up a number of my “new” View-Masters when in Las Vegas: if you happen to be coming along to Cyrille Fauvel’s class at which I’ll be co-speaking, I’ll bring a number of them along to that. The class is all about stereoscopy, after all. :-) (At some point I’d love to try turning an old View-Master into a real VR viewer, something along the lines of this awesome 3D media viewer hack.)

Back to the subject of Google Cardboard… in parallel with the fun that’s been happening with A360, a number of noteworthy Cardboard apps have been popping up. Let’s take a look at some of the apps you can play around with to get a feel for this platform’s capabilities… (I’ve been using Android for this, by the way: you’ll find that not all the below apps are available for other devices.)

When you get your hands on a Cardboard kit – such as the DODOcase VR Cardboard Toolkits being given out at DevDays – you should head on over to Chrome Experiments for Cardboard – which really showcases the power of WebGL for creating VR apps – and the Google Cardboard demo app. There’s lots of fun stuff to be found there, such as Google Earth in 3D:

Google Earth

I’ve just started playing with VR Cinema, which seems fun. It allows you to play videos in a number of formats via Cardboard… it can’t retrofit 3D to 2D videos, of course, but even displaying your 2D home movies via the device is quite cool:

VR Cinema

Movies that encode 3D information will be even more interesting to watch using this app, presumably.

Perhaps the most View-Master-like of the apps I’ve seen – in spirit, at least – is Orbulus, which allows you to view any of a large number of photo spheres that have been captured and shared by people. There’s a slight camera offset between the views for the two eyes, but I don’t believe two separate spheres have been captured: the 3D effect isn’t that strong. It’s still a nice way to view spherical panoramas, though.

Orbulus

Now onto some more three-dimensional experiences delivered via Cardboard…

Jaunt have created a really cool Cardboard app allowing you to take the stage with Paul McCartney for a rendition of “Live and Let Die”. This one was captured using multiple cameras and gives a much more 3D experience. Well worth trying.

Paul McCartney's VR concert

Refugio 3D Space Station is an immersive VR environment – you start on a space station and can walk through teleportation portals to various worlds – which is very impressive. They also seem to have their own Cardboard-like hardware platform. (Thanks to Ugo de Maio for bringing this one to my attention!)

Refugio 3D Space Station

Finally, Volvo have launched their Volvo Reality Cardboard app allowing people to experience the new XC90. You need to be in North America to install this one, though, so I haven’t actually tried it myself.

Volvo Cardboard

New apps are popping up every day. If you come across any that you find worth sharing, please post a comment.

Red View-Master photo credit: ansik via photopin cc

November 19, 2014

Project Memento now has direct handheld scanning

Some exciting news from the Reality Computing team: Project Memento – which has been updated to v1.0.11.3 on Autodesk Labs – now supports direct input from the Artec 3D Eva scanner. You can scan a 3D object or scene – generating a mesh – directly in the Memento software. I’ve been hoping/waiting for this to happen for some time.

Here’s a quick GIF showing – in broad strokes – how the process works. I’ve basically put a bunch of screenshots together – these were captured manually rather than at regular intervals – to show the flow in a lightweight manner, so please don’t attempt to use it to assess performance: the intention was just to show how scanner frames can lead to a mesh being generated.

Artec scan

You can currently only integrate a single scanning pass using Memento, but the team is working hard to support aggregation of multiple scans, too. They’re also hoping to support additional scanner models/brands… next on the list is Artec’s Spider, which should be supported along with automatic alignment of multiple scans in a couple of weeks or so. I’d personally love to see scanning via Kinect v2 supported, but the initial focus is very much on professional-grade scanning devices: the Eva and Spider are currently in the $19-22K price range, just to give you an idea.

More is happening with Memento, of course – the previously linked blog post mentions tighter A360 integration, reporting and export enhancements and SpaceMouse integration – but for me this is the big ticket item of this release and nicely indicative of the direction in which things are headed.

AU 2014 Handout: Using SensorTag as a Low-Cost Sensor Array for AutoCAD

[This handout is for “SD5013 - Using SensorTag as a Low-Cost Sensor Array for AutoCAD”, a 60-minute class I’ll be presenting at AU 2014. Here’s the sample project that accompanies this handout.]

 

Introducing SensorTag

SensorTag is a $25 device containing a number of sensors – an accelerometer, a gyroscope, a magnetometer, a thermometer, a hygrometer and a barometer – that communicates with a monitoring system (whether an iOS or Android mobile device or a Windows or Linux PC) via Bluetooth 4.0 (also known as Bluetooth Smart or Bluetooth Low Energy – BLE). Texas Instruments have wrapped their CC2541 sensor platform in a consumer-friendly package with the intention of driving adoption among app developers, who will use it to create solutions for health monitoring and the Internet of Things (just to name a couple of “hot” areas).

The device is powered by a single CR2032 coin cell battery, which is enough to power the low-power sensor array and Bluetooth communications for several months (this will depend on the number of sensors used and the frequency of communication, of course).

Developers might use the device to monitor the temperature of cooking pans and hot drinks, to track the location of keys using the newer iBeacon capability, or even to test how level a surface is.

Uses for SensorTag

At the core of the SensorTag is the CC2541 chip. We’ll see later on that this chip is used to power other devices, too.

Here’s a block diagram of the SensorTag device, outlining the various components:

SensorTag block diagram

Here’s an assembly diagram alongside a close-up of the SensorTag’s internals, showing where each of the sensors resides:

Hardware description

Assembly

Applications for 3D design apps

While the SensorTag contains some interesting sensors, those that are of most interest for controlling 3D design applications are the spatial sensors. Just like many other devices, SensorTag contains an IMU – or inertial measurement unit – which consists of 3-axis accelerometer, gyroscope and magnetometer components. The IMU provides information on how the SensorTag is oriented and moving in 3D space.

This creates some interesting possibilities with respect to using SensorTag to control model navigation inside AutoCAD, for instance: movements of the SensorTag can drive view changes in AutoCAD.

The same mechanism could almost certainly be implemented using a smartphone rather than a SensorTag, but there are clear advantages to building a cheap, dedicated device for this.

Getting to know your SensorTag

The simplest way to connect with a SensorTag device and get to know its services is to install the SensorTag app for iOS or Android:

This will allow you to experience the data streaming from the device and determine whether it’s of interest to your specific domain.

SensorTag Android app

Transitioning to the Desktop

Hardware

Connecting SensorTag with a Windows machine is more complicated, however.

CC2531_USB_dongle

You need to use a compatible Bluetooth USB dongle, the TI CC2540 USB dongle. This can be bought for $49 from the TI website.

CC_Debugger

You’ll also need a CC Debugger to be able to flash the CC2540 device with the required firmware, which is also available for $49.

These two components are bundled together in a couple of different packages. The first possibility is the CC2541 Mini Development Kit, which – for $99 – includes a dongle, a debugger and a “keyfob” device with a few basic components (LEDs, a buzzer, two buttons and an accelerometer).

med_cc2541dk-mini_cc2541dk-mini_web

The second, the CC2541 Bluetooth Smart Remote Control Kit – for $50 more at $149 – includes a Smart Remote Control which contains a gyroscope, an accelerometer and many more buttons (it’s a remote control, after all).

med_cc2541dk-rc_cc2541dk-rc_web

Software

Besides this hardware, there is some software needed to communicate with the SensorTag device.

BTool is essentially the driver for the CC2540 dongle: it allows you to map a virtual COM port that will be used for communication between the software and the SensorTag (via the dongle, of course).

BLE Device Monitor is the desktop equivalent of the tool we saw earlier for iOS/Android. It allows you to discover and understand the SensorTag’s services, as well as to upgrade the firmware (providing it already has appropriate firmware – for earlier versions you’ll need SmartRF Studio).

SmartRF Studio is a tool you can use in conjunction with the CC Debugger to flash or reflash CC254x devices (e.g. the dongle or the SensorTag).

 

Coding with SensorTag

The main resource available for developers interested in developing applications based on SensorTag is the SensorTag wiki: http://ti.com/sensortag-wiki

This lists links to a number of samples – including one to my blog – showing how to connect with SensorTag. As you’d expect, there are a number of mobile-oriented samples and toolkits as well as two main samples of interest to Windows developers.

SensorTag C# Library

This library – posted at http://sensortag.codeplex.com – simplifies the creation of SensorTag apps for the Windows Store. It makes use of APIs available in Windows 8.1, which clearly creates a platform dependency. That said, the APIs are clean and modern, allowing you to use language constructs such as async/await, for instance.

If you’re creating a Windows Store app for Windows 8.1 or higher, this is the way to go.

BLEHealthDemo

This is the project used as a basis for the AutoCAD integration prototype. The app was in development on .NET until Beta 10, but development has apparently since shifted across to Android. The existing codebase is probably 50% complete (according to the developer, and based on their milestones), but has some useful core code, despite the need for significant restructuring.

Here are some snapshots from this app. This first one shows the main page with a lot of UI:

BLEHealthDemo sample

It’s this tab that shows some of the potential for our purposes: we have a 3D view representing the orientation of the SensorTag device.

With the 3D view

 

Integrating SensorTag with AutoCAD

The main goal of this sample is to manipulate the current AutoCAD view based on spatial input from an external device (in this case we’re using SensorTag). For this simple scenario we’re going to focus on using the accelerometer data.

This gives us greater simplicity and reduced power consumption, as we’re only enabling input from a single sensor. That said, it doesn’t provide the best accuracy: we’ll see later what might be done to improve the situation.

This application can easily just be command-line based: we will interact with AutoCAD via a jig, but there’s no need to provide a GUI. During the jig we’ll poll SensorTag for input, using the data to modify the current 3D view appropriately and then forcing another cycle by pumping a Windows message. This is a similar approach to the one we saw with the AutoCAD sample integrations for Kinect and Leap Motion.
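To make that loop concrete, here’s a minimal sketch of the pattern – illustrative only, not the actual class project code. ReadAccelerometer(), UpdateView() and the SENSORNAV command name are hypothetical placeholders for the BLE plumbing, the view manipulation and the command registration, respectively.

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Runtime;

namespace SensorTagNavigation
{
  // Minimal sketch of a polling jig: ReadAccelerometer() and
  // UpdateView() are placeholders, not part of the real sample
  public class NavigationJig : DrawJig
  {
    protected override SamplerStatus Sampler(JigPrompts prompts)
    {
      double x, y, z;
      if (ReadAccelerometer(out x, out y, out z))
      {
        // Use the latest sample to adjust the current view
        UpdateView(x, y, z);
      }

      // Acquire a point purely to keep the jig alive: Enter or
      // Escape lets the user exit
      var opts = new JigPromptPointOptions("\nSensor navigation");
      opts.UserInputControls = UserInputControls.NullResponseAccepted;
      var res = prompts.AcquirePoint(opts);
      if (res.Status != PromptStatus.OK)
        return SamplerStatus.Cancel;

      // Nudge the cursor by a pixel to pump a Windows message,
      // which makes AutoCAD call Sampler() again
      var pt = System.Windows.Forms.Cursor.Position;
      System.Windows.Forms.Cursor.Position =
        new System.Drawing.Point(pt.X, pt.Y + 1);

      return SamplerStatus.OK;
    }

    protected override bool WorldDraw(
      Autodesk.AutoCAD.GraphicsInterface.WorldDraw draw)
    {
      // Nothing to draw: the jig is only being used as an input loop
      return true;
    }

    // Placeholders for the device access and view update logic
    private bool ReadAccelerometer(out double x, out double y, out double z)
    {
      x = y = z = 0;
      return false;
    }

    private void UpdateView(double x, double y, double z)
    {
    }
  }

  public class NavigationCommands
  {
    [CommandMethod("SENSORNAV")]
    public void SensorNavigation()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (doc == null)
        return;
      doc.Editor.Drag(new NavigationJig());
    }
  }
}

The cursor nudge is deliberately crude: a real implementation would alternate the direction (or post a benign message some other way) so the cursor doesn’t wander across the screen.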

Understanding 3D input

An Inertial Measurement Unit (IMU) contains three main components:

  • Accelerometer
    • This measures linear acceleration – including that due to gravity
  • Gyroscope
    • Measures relative changes in rotational orientation
  • Magnetometer
    • Measures orientation relative to Earth’s magnetic field

The best results are obtained via sensor fusion – combining the results of all three sources of input – but this is clearly more complex to implement.

Some accelerometers provide higher level roll, pitch and yaw values, but SensorTag’s does not: we have to calculate this for ourselves.
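For reference, when the device is close to stationary – so that gravity dominates the measured acceleration – roll and pitch can be estimated directly from a single accelerometer sample using the standard static-tilt formulas. Here’s a minimal sketch (illustrative, not code from the class sample):

using System;

public static class Attitude
{
  // Estimate roll and pitch (in radians) from one accelerometer
  // sample, assuming the device is near-stationary so that gravity
  // dominates the reading. Yaw can't be recovered from the
  // accelerometer alone: that needs the magnetometer (or gyro
  // integration).
  public static void RollPitch(
    double ax, double ay, double az,
    out double roll, out double pitch)
  {
    roll = Math.Atan2(ay, az);
    pitch = Math.Atan2(-ax, Math.Sqrt(ay * ay + az * az));
  }
}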

What we need from the SensorTag

We’re going to subscribe to two main services from the SensorTag: the accelerometer and the “simple keys service”, which tells us when one or other of the buttons is pressed.

For the accelerometer, we need to set the update period to its minimum interval, so we get called 10 times per second with sensor data.

The standard orbit mode – which is active when no buttons are pressed – really only needs X and Y values from the accelerometer: we’re going to ignore the Z value, as adding “yaw” makes the experience a lot less predictable. For the “zoom & pan” mode – active when the left button gets pressed – we’re going to use all three axes. We’re going to estimate the distance travelled along X and Y directions to drive panning of the view and along the Z direction for zooming.

The right button will simply reset the view to the original one: this is helpful as some rotation does creep in, over time.

Determining the distance travelled

Our accelerometer provides the acceleration along three axes. To get the distance travelled we need to integrate twice: the first integration provides the velocity at a particular moment which we need to integrate again to get distance.

Thankfully we don’t actually need the data to be particularly accurate: the main thing is to determine the general direction, although some measure of magnitude would be nice. This approach of doubly integrating accelerometer data is notoriously inaccurate: small amounts of noise – and issues such as device tilting – can lead to big issues in the data.

We’re going to use a fairly crude but effective integration technique: the trapezoidal rule:

Trapezoidal rule

trapezoidal rule - equation

Given the frequency of our sampling, this will provide sufficiently accurate results. But if the sensor data is noisy we’ll suffer from GIGO (Garbage In, Garbage Out).
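To make this concrete, here’s a minimal sketch of the double integration for a single axis, assuming acceleration samples arrive at our fixed 10Hz rate and have already had the gravity component removed (illustrative only, not the exact code from the class sample):

public class TrapezoidalIntegrator
{
  private readonly double _dt;   // sampling interval in seconds
  private double _prevAccel;     // previous acceleration sample
  private double _prevVelocity;  // previous velocity estimate

  public double Velocity { get; private set; }
  public double Distance { get; private set; }

  public TrapezoidalIntegrator(double samplingInterval)
  {
    _dt = samplingInterval;  // e.g. 0.1 for 10 samples per second
  }

  // Feed in one acceleration sample: integrate once (trapezoidal
  // rule) to update the velocity, then again to update the distance
  public void AddSample(double accel)
  {
    Velocity += 0.5 * (_prevAccel + accel) * _dt;
    Distance += 0.5 * (_prevVelocity + Velocity) * _dt;

    _prevAccel = accel;
    _prevVelocity = Velocity;
  }
}

In practice you’d keep one integrator per axis and – as noted above – treat the output as an indication of direction and rough magnitude rather than an accurate displacement: noise accumulates quickly when integrating twice.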

 

Improvements and Applications

Accelerometers – while accurate in the longer term as they don’t suffer from drift – can be quite noisy in the short term. It’s not possible to accurately track spatial position over a multi-second period using accelerometer data alone.

Gyroscopes measure relative rotation and so suffer from drift in the longer term (while being accurate in the short term).

For more “serious” applications, some kind of filtering should be applied to the data:

  • Complementary filters
    • The simplest filter type: it assigns a higher weighting to the previous data and a smaller weighting to the newly arrived data (see the sketch after this list)
  • Kalman filters
    • These measure state over time, assessing the quality of the newly arrived data relative to the existing set. Clearly more complicated to implement
  • Mahony & Madgwick filters
    • These more advanced filters take 3-axis data from each of the accelerometer, the gyroscope and the magnetometer to get ultimate positioning accuracy
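To give a feel for the simplest of these, here’s a minimal sketch of a complementary filter fusing a gyroscope rate with an accelerometer-derived angle for a single axis. The 0.98/0.02 weighting is a typical starting point rather than a tuned value, and this is illustrative code, not part of the class sample:

public class ComplementaryFilter
{
  private readonly double _alpha;  // weighting given to the gyro path

  public double Angle { get; private set; }

  public ComplementaryFilter(double alpha = 0.98)
  {
    _alpha = alpha;
  }

  // gyroRate: angular rate from the gyroscope (rad/s)
  // accelAngle: angle estimated from the accelerometer (rad)
  // dt: time since the previous sample (s)
  public void Update(double gyroRate, double accelAngle, double dt)
  {
    // Trust the gyro in the short term (smooth but drifts) and the
    // accelerometer in the long term (noisy but drift-free)
    Angle = _alpha * (Angle + gyroRate * dt) + (1 - _alpha) * accelAngle;
  }
}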

.NET implementations of the Mahony & Madgwick filters are available at:

http://www.x-io.co.uk/open-source-imu-and-ahrs-algorithms

The tricky part will be making sure the various data streams are synchronized in time.

Moving forwards

IMUs and spatial data are everywhere: integrated into all modern smartphones, but also in custom chips that are being placed into all kinds of devices. As an example, you can access the ‘deviceorientation’ event in an HTML page on a smartphone to effectively implement a VR application based on Google Cardboard. This event provides handy roll, pitch & yaw values.

It seems as though every day there are new IMU projects being launched: check FreeIMU or IMUduino as examples.

As spatial device input becomes more widespread, the tools to work with it will become easier: filters should be components you can integrate painlessly, for instance.

SensorTag is just one way to get started integrating this kind of input into your apps. It’s worth starting to think about how you might make use of spatial data – or data coming from SensorTag’s other 3 sensors, for that matter. There are definitely interesting use cases for integrating accurate spatial data into a 3D modeling environment.

[Source of some content: this previous post & http://www.olliw.eu/2013/imu-data-fusing]

November 17, 2014

Giveaway: Retrospecs, an iOS app for fans of retro computing

Retrospecs

Every so often I get an attack of nostalgia for the early days of personal computing. The latest bout was triggered by the discovery of an app called Retrospecs, an iOS-based image processing app that transforms photos to use the colour palettes of 8- and 16-bit computers.

The latest release – v1.7 – supports the following emulations:

  • Teletext
  • Apple ][ (Low res)
  • Atari 2600 (NTSC)
  • Intellivision
  • IBM CGA (6 variations)
  • BBC Micro (Mode 1 & 2)
  • Sinclair ZX Spectrum
  • Commodore 64 (Low res & high res modes)
  • Colecovision
  • Dragon 32 (PMODE 3)
  • Thomson TO7
  • MSX (Screen mode 2 and (for comedy value) 3)
  • Sinclair QL (Low res & high res modes)
  • Apple Macintosh (Original 1984 model)
  • Thomson MO5
  • Amstrad CPC (Colour & green screen versions)
  • Commodore C16/+4 (Low res & high res modes)
  • IBM EGA
  • Commodore Amiga (OCS - 320x256 in 32 colours and 640x256 in 16 colours)
  • Atari ST (320x200 in 16 colours and 640x200 in 4 colours)
  • Sega Master System
  • IBM VGA (Mode 13h)
  • Sega Mega Drive
  • Nintendo Game Boy
  • Amstrad 464/6128 plus
  • Super Nintendo Entertainment System

You can also apply filters and tweak the dithering to get better results – very handy if the image is dark and needs a vibrancy boost (this is especially important if choosing one of the more gaudy colour palettes).

I’ve just upgraded to the latest release on my iPad 2 (the only iOS device I own) and took the new modes for a spin. Here’s a GIF of the whole set being applied to a fairly boring photo of my neighbour’s house. [Click the image for the full-size, 3MB version where you can read the names of the various modes.]

Retrospecs

This app is really fun: a great way to create retro images for profile pics or greeting cards.

It’s also a real bargain, weighing in at just $0.99 in the US store. In a special pre-holidays giveaway, the first 10 people to post a comment on this blog post will receive a promo code to install it for free! Many thanks to @8bitartwork for offering these for this blog’s readers.

November 13, 2014

.NET goes open source | Machine learning with F#

Taking a rest from my AU prep, I headed across to Zurich last night for an F# meetup focused on machine learning. Primarily because I’m interested in machine learning as a field but also because it seemed a good opportunity to dust off my F# skills.

It was interesting to be on the train when the news from the first day of Microsoft’s Connect(); event hit the airwaves: the main headline being that .NET is going open source and cross-platform. Yes, folks, it’s actually happening: .NET 5 is going to be supported on Linux and OS X. And .NET is already on GitHub, with the first pull request approved. It’s not clear what exactly the impact of this news is with respect to desktop software on the Mac: the initial target for this is .NET Core – the server-targeting subset of .NET – but I have to see this as a great thing for the .NET community, however it plays out.

So, back to last night’s F# session, which was of course with a room full of people who are used to working with an open source language: F# went open source a couple of years ago. The session was run by Mathias Brandewinder, who is holding this same “coding dojo” at various locations around Europe. Mathias is originally from France but is now based in San Francisco. He’s clearly passionate about using F# for machine learning applications and engaging with the F# community as a whole.

The dojo was based around a programming challenge from Kaggle.com: to write a hand-written digit recognizer that takes a sequence of greyscale bitmaps (stored in a CSV file as a series of per-pixel integers) and attempts to classify them based on some training data (a subset of the data provided on Kaggle.com, I should add). We were sorted into groups, although the members of our group ended up deciding to attack the problem independently (Daniel, the person sitting to my left in the below photo, is a professional F# programmer… he was already porting his code to be parallelized on the GPU by the time I’d managed to finish the exercise).

Machine learning dojo

Here’s the F# code I managed to come up with for the basic challenge (after a bit of tidy up).

open System

open System.IO

 

// Functions to go from an array of comma-separated strings

// to a list of tuples of the classified digit and an array

// of integers representing pixels

 

let split a = List.map (fun (s:string) -> s.Split(',')) a

let ints = Array.map Int32.Parse

let tuple a = Seq.head a, Seq.skip 1 a |> Seq.toArray

 

// Classify using the nearest-neighbour algorithm, checking

// the Euclidean distance between the respective pixels in the

// images being compared.

 

let eucDist p1 p2 = p1-p2 |> fun x -> x*x

let dist a b = Array.map2 eucDist a b |> Array.sum

let classify a r = r |> List.minBy (fun t -> dist (snd t) a) |> fst

 

// Read in a CSV file from the specified path and create

// a list of tuples containing the classified digit and an

// array of integers for the pixels

 

let read path =

  File.ReadAllLines(path) |> Array.toList |> split

  |> List.tail |> List.map ints |> List.map tuple

 

let tpath = "Z:\\Data\\FSharp dojo\\trainingsample.csv"

let vpath = "Z:\\Data\\FSharp dojo\\validationsample.csv"

 

// Read in the training data and store it for later use

 

let tdata = read tpath

 

// Read in the validation sample and print the percentage of

// cases that are correctly classified

 

read vpath

  |> List.averageBy (fun x ->

    if (classify (snd x) tdata = fst x) then 1. else 0.)

  |> (*) 100. |> printfn "%g%% of entries classified"

As suggested by Mathias, the code implements the “nearest neighbor” algorithm (i.e. the k-nearest neighbors algorithm where k == 1). This morning I went ahead and coded up a version of the classify function that allows the k nearest neighbours to vote on the result, too…

let classify2 a r =

  r |> List.sortBy (fun t -> dist (snd t) a)

  |> Seq.take 5 |> Seq.countBy id

  |> Seq.head |> fst |> fst

… but it didn’t actually change anything: both versions returned the same result (which is the one expected, unless you start to implement more advanced techniques to recognise off-centre digits, etc.).

94.4% of entries classified

It was a really fun introduction to solving machine learning problems using F#. This is a field that’s clearly relevant when users interact with complex systems – including design tools – so I’m happy to have taken at least a baby step towards some real understanding of the domain.

Photo copyright Mathias Brandewinder.

November 12, 2014

VS2012 debugger type visualizers for ObjectARX

The C++ developers among you may remember the autoexp.dat file, which tells older versions of Visual Studio how to visualize custom C++ types during a debug session. Here’s an ancient post showing how we extended it for some basic ObjectARX types and another showing how to do so via a custom plug-in.

In Visual Studio 2012, a newer XML-based mechanism was introduced to do something similar. In today’s post we’ll look at a custom .natvis file that exposes some basic ObjectARX types to the Visual Studio debugger.

This file was created by Davis Augustine in response to a query from Augusto Gonçalves. Augusto has also posted the file to GitHub – something we talked about in the last post – which will allow you all to contribute changes to the file, should you so wish.

For Visual Studio (2012 or later) to find the .natvis file, it has to be in one of these locations:

%VSINSTALLDIR%\Common7\Packages\Debugger\Visualizers

%USERPROFILE%\My Documents\Visual Studio 2012\Visualizers

[The first requires admin rights and the second should have the 2012 changed to the appropriate number for newer versions of VS, of course.]

Here’s a recent version of the acad.natvis file, reformatted for this blog. This version supports AcArray, AcRxClass, AcString, CAdUiPathname, CAdUiVolumeDescriptor and resbuf, with future candidates being AcRxValue, AcDbObject, AcDbObjectId, AcGe*, etc. You’ll continue to find the latest & greatest on GitHub, of course.

<?xml version="1.0" encoding="utf-8"?>

<AutoVisualizer

  xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">

 

  <!-- Version 1.0e 11nov14 -->

  <!-- for acad/arx types -->

 

  <Type Name="AcArray&lt;*&gt;">

    <DisplayString>

      {{Len = {mLogicalLen}}}

    </DisplayString>

    <Expand>

      <Item Name="Len">mLogicalLen</Item>

      <Item Name="Buf Siz">mPhysicalLen</Item>

      <ArrayItems>

        <Size>mLogicalLen</Size>

        <ValuePointer>mpArray</ValuePointer>

      </ArrayItems>

    </Expand>

  </Type>

 

  <Type Name="AcRxClass">

    <DisplayString Condition="m_pImp!=0">

      {*(((wchar_t **)m_pImp)+1),su}

    </DisplayString>

  </Type>

 

  <Type Name="AcString">

    <DisplayString Condition="mnFlags==0">""</DisplayString>

    <DisplayString Condition="mnFlags==1">{mszStr,s}</DisplayString>

    <DisplayString Condition="mnFlags==2">

      {mchr.mwszStr,su}

    </DisplayString>

    <DisplayString Condition="mnFlags==3">

      {mptr.mpszData,s}

    </DisplayString>

    <DisplayString Condition="mnFlags==4">

      {mptr.mpwszData,su}

    </DisplayString>

    <DisplayString Condition="mnFlags==5">

      {*(wchar_t **)(mptr.mpPtrAndData),su}

    </DisplayString>

    <StringView Condition="mnFlags==0">""</StringView>

    <StringView Condition="mnFlags==1">mszStr,s</StringView>

    <StringView Condition="mnFlags==2">mchr.mwszStr,su</StringView>

    <StringView Condition="mnFlags==3">mptr.mpszData,s</StringView>

    <StringView Condition="mnFlags==4">mptr.mpwszData,su</StringView>

    <StringView>mptr.mpPtrAndData</StringView>

  </Type>

 

  <Type Name="CAdUiPathname">

    <DisplayString Condition="m_pathbuffer!=0">

      {*m_pathbuffer,su}

    </DisplayString>

    <DisplayString Condition="m_this_type==0">

      NO_PATH

    </DisplayString>

    <StringView Condition="m_pathbuffer!=0">

      *m_pathbuffer

    </StringView>

  </Type>

 

  <Type Name="CAdUiVolumeDescriptor">

    <DisplayString Condition="m_vol_localname!=0">

      {*m_vol_localname}

    </DisplayString>

    <StringView Condition="m_vol_localname!=0">

      *m_vol_localname

    </StringView>

  </Type>

 

  <Type Name="resbuf">

    <!--ARX/Lisp Function arg-->

    <DisplayString Condition="restype==5000">

      rtnone

    </DisplayString>

    <DisplayString Condition="restype==5001">

      {resval.rreal} rreal</DisplayString>

    <DisplayString Condition="restype==5002">

      {resval.rpoint[0]},{resval.rpoint[1]}

    </DisplayString>

    <DisplayString Condition="restype==5003">

      {resval.rint} rint

    </DisplayString>

    <DisplayString Condition="restype==5004">

      {resval.rreal} rreal

    </DisplayString>

    <DisplayString Condition="restype==5005">

      {resval.rstring}

    </DisplayString>

    <DisplayString Condition="restype==5006">

      soft pointer id

    </DisplayString>

    <DisplayString Condition="restype==5007">

      pick set

    </DisplayString>

    <DisplayString Condition="restype==5008">

      orientation</DisplayString>

    <DisplayString Condition="restype==5009">

      {resval.rpoint[0]},{resval.rpoint[1]},{resval.rpoint[2]}

    </DisplayString>

    <DisplayString Condition="restype==5010">

      {resval.rlong} rlong

    </DisplayString>

    <DisplayString Condition="restype==5016">

      list-begin

    </DisplayString>

    <DisplayString Condition="restype==5017">

      list-end

    </DisplayString>

    <DisplayString Condition="restype==5018">

      dotted pair

    </DisplayString>

    <DisplayString Condition="restype==5031">

      {resval.mnInt64} int64

    </DisplayString>

 

    <!--DXF/XData String-->

    <DisplayString

      Condition="(restype&gt;=1) &amp;&amp; (restype&lt;=9)">

      {resval.rstring}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=100) &amp;&amp; (restype&lt;=103)">

      {resval.rstring}

    </DisplayString>

    <DisplayString Condition="restype==105">

      {resval.rstring}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=300) &amp;&amp; (restype&lt;=309)">

      {resval.rstring}

    </DisplayString>

    <DisplayString Condition="restype==410">

      {resval.rstring}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=430) &amp;&amp; (restype&lt;=439)">

      {resval.rstring}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=470) &amp;&amp; (restype&lt;=479)">

      {resval.rstring}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=999) &amp;&amp; (restype&lt;=1003)">

      {resval.rstring}

    </DisplayString>

 

    <!--DXF/XData Double-->

    <DisplayString

      Condition="(restype&gt;=38) &amp;&amp; (restype&lt;=59)">

      {resval.rreal} rreal

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=140) &amp;&amp; (restype&lt;=149)">

      {resval.rreal} rreal

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=460) &amp;&amp; (restype&lt;=469)">

      {resval.rreal} rreal

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=1040) &amp;&amp; (restype&lt;=1042)">

      {resval.rreal} rreal

    </DisplayString>

 

    <!--DXF/XData Point-->

    <DisplayString

      Condition="(restype&gt;=10) &amp;&amp; (restype&lt;=17)">

      {resval.rpoint[0]},{resval.rpoint[1]},{resval.rpoint[2]}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=110) &amp;&amp; (restype&lt;=112)">

      {resval.rpoint[0]},{resval.rpoint[1]},{resval.rpoint[2]}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=210) &amp;&amp; (restype&lt;=219)">

      {resval.rpoint[0]},{resval.rpoint[1]},{resval.rpoint[2]}

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=1010) &amp;&amp; (restype&lt;=1013)">

      {resval.rpoint[0]},{resval.rpoint[1]},{resval.rpoint[2]}

    </DisplayString>

 

    <!--DXF/XData Int16-->

    <DisplayString

      Condition="(restype&gt;=60) &amp;&amp; (restype&lt;=79)">

      {resval.rint} rint

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=270) &amp;&amp; (restype&lt;=279)">

      {resval.rint} rint

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=370) &amp;&amp; (restype&lt;=389)">

      {resval.rint} rint

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=400) &amp;&amp; (restype&lt;=409)">

      {resval.rint} rint

    </DisplayString>

    <DisplayString

      Condition="restype==1070">

      {resval.rint} rint

    </DisplayString>

 

    <!--DXF/XData Int32-->

    <DisplayString

      Condition="(restype&gt;=90) &amp;&amp; (restype&lt;=99)">

      {resval.rlong} rlong

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=420) &amp;&amp; (restype&lt;=429)">

      {resval.rlong} rlong

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=440) &amp;&amp; (restype&lt;=459)">

      {resval.rlong} rlong

    </DisplayString>

    <DisplayString

      Condition="restype==1071">

      {resval.rlong} rlong

    </DisplayString>

 

    <!--DXF/XData ObjectId-->

    <DisplayString

      Condition="(restype&gt;=330) &amp;&amp; (restype&lt;=339)">

      soft pointer id

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=340) &amp;&amp; (restype&lt;=349)">

      hard pointer id

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=350) &amp;&amp; (restype&lt;=359)">

      soft owner id

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=360) &amp;&amp; (restype&lt;=369)">

      hard owner id

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=390) &amp;&amp; (restype&lt;=399)">

      hard pointer id

    </DisplayString>

 

    <!--DXF/XData 8bit int-->

    <DisplayString

      Condition="(restype&gt;=280) &amp;&amp; (restype&lt;=289)">

      {resval.rint} 8-bit rint

    </DisplayString>

    <DisplayString

      Condition="(restype&gt;=290) &amp;&amp; (restype&lt;=299)">

      {resval.rint} bool rint

    </DisplayString>

 

    <!--DXF/XData Binary Chunk -->

    <DisplayString

      Condition="(restype&gt;=310) &amp;&amp; (restype&lt;=319)">

      binary size={resval.rbinary.clen}

    </DisplayString>

    <DisplayString

      Condition="restype==1004">

      binary size={resval.rbinary.clen}

    </DisplayString>

 

    <!--DXF/XData Int64 -->

    <DisplayString

      Condition="(restype&gt;=160) &amp;&amp; (restype&lt;=169)">

      {resval.mnInt64} int64

    </DisplayString>

 

    <Expand>

      <Item Name="rbnext">rbnext</Item>

      <Item Name="restype">restype</Item>

    </Expand>

  </Type>

 

</AutoVisualizer>

To see the difference it makes to variable viewing, here’s how the debugger displays various kinds of AcStrings by default without the .natvis file:

+ acs1 {mnFlags=4 '\x4' mptr={mnPad2=0x0000004eaddde519 "" mpwszData=0x0000004ebabe1860 L"This is an AcString" ...} ...} AcString

+ acs2 {mnFlags=3 '\x3' mptr={mnPad2=0x0000004eaddde549 "" mpwszData=0x0000004ebabe1920 L"桴獩椠⁳湡愠獮⁩捁瑓楲杮ﷲ﷽꯽ꮫꮫꮫꮫꮫꮫꮫﺫﻮﻮﻮﻮﻮ" ...} ...} AcString

+ acs3 {mnFlags=1 '\x1' mptr={mnPad2=0x0000004eaddde579 "a" mpwszData=0xcccccccccccccccc ...} ...} AcString

+ acs4 {mnFlags=2 '\x2' mptr={mnPad2=0x0000004eaddde5a9 "" mpwszData=0xcccccccccccccccc ...} ...} AcString

+ acs5 {mnFlags=0 '\0' mptr={mnPad2=0x0000004eaddde5d9 "" mpwszData=0x0000000000000000 mpszData=0x0000000000000000 ...} ...} AcString

+ acs6 {mnFlags=5 '\x5' mptr={mnPad2=0x0000004eaddde609 "" mpwszData=0x0000004eb4cf2780 L"■듏N" mpszData=0x0000004eb4cf2780 " %Ï´N" ...} ...} AcString

Here’s how these are displayed using acad.natvis:

+ acs1 L"This is an AcString" AcString

+ acs2 "this is an ansi AcString" AcString

+ acs3 "a" AcString

+ acs4 L"u" AcString

+ acs5 "" AcString

+ acs6 L"this is both unicode and ansi" AcString

A big improvement, I’m sure you’ll agree. Many thanks to Davis and Augusto for making this happen!

November 11, 2014

Time to go Git

I’ve been chewing on this for some time, now, but I’ve decided it’s time to act. Well, as soon as AU is over I’ll act, anyway. Which I expect means it’ll morph into a New Year’s resolution for 2015. :-)

Git

Back when this blog was launched, the Git project was still relatively young. But it’s clearly become the version control technology to use, especially when putting code out there in the open. And this blog is all about putting stuff out there in the open, after all.

Autodesk is using GitHub for our PaaS samples – which for now include those for the Viewing & Data web-service and AutoCAD I/O – and it’s being used internally for more and more activities elsewhere.

I’ve used GitHub a little bit for my own viewer samples – and I really like the fact that Heroku links to it, pulling down the source to build the web site/service – but I feel the time has come to dive in and use the technology more deeply.

In preparation for this, I’m currently on chapter 2 of Pro Git – it’s available for free as an e-book, so I just emailed the .mobi version to my Kindle – and it seems to be a great resource. I’ve already learned a lot from the first chapter and a half.

My plan is to take the various samples I’ve created for this blog over the last 8.5 years and manage them using GitHub, allowing others to contribute fixes. The vast majority of these samples are single files that will just be part of a main aggregator project (that’s how I have them on my system: I have a main project that I add the various C# files into when I develop and later need to test them), but there will be some additional standalone projects, too.

This is perhaps a bigger job than you might think, for a few reasons. I started to work through the files I have in my aggregator project, but found it was taking too long: I’ve somewhat foolishly polluted the project with code people have sent me to test issues, over the years, so I can’t just publish them all, as-is.

I’ve decided I’m going to take another track, to use the Typepad API to access post information, extract the HTML content – and the sections of code from them – and compare them programmatically against the files I have locally. This will at least allow me to take all the “valid files” and create a list of the ones that need manual intervention of some kind. At least that’s the plan – we’ll see how and when it gets completed. I think (or hope) it’ll be worth the effort.

November 07, 2014

Displaying a graph of AutoCAD drawing objects using JavaScript and .NET

It seems like I’ve been living in JavaScript land (and no, I deliberately didn’t say “hell” – it’s actually been fun :-) for the last few weeks, between one thing or another. But I think I’ve finally put the finishing touches on the last of the JavaScript API samples I’ve prepared for AU 2014.

This sample was inspired by Jim Awe – an old friend and colleague – who is working on something similar for another platform. So I can’t take any credit for the way it works, just for the plumbing it took to make it work with AutoCAD.

It’s basically an HTML palette using a handy open source library called D3.js – for Data-Driven Documents – and d3pie, a layer on top of that to simplify creating pie charts. The palette connects to the active drawing and asks .NET to provide some data on the entities inside modelspace. From our .NET code we use LINQ to query the types of object from the modelspace’s ObjectIds, which we then package up as JSON and return for display in HTML.

This is all the code that’s needed to get this data – it can all be done via the ObjectIds without opening any of the entities (just the modelspace). LINQ is really great at this kind of query.

var q =

  from ObjectId o in ms

  group o by o.ObjectClass.DxfName into counts

  select new { Count = counts.Count(), Group = counts.Key };

When the user clicks on a wedge in the pie chart – representing the objects of a particular type – those objects get placed in the pickfirst selection set, ready for something to be done with them.

Here’s a screencast of the application working:




Here’s the HTML…

<!doctype html>

<html>

<head>

  <title>Chart</title>

  <link rel="stylesheet" href="style.css">

  <style>

    html, body { height: 100%; width: 100%; margin: 0; padding: 0; }

    body { display: table; }

  </style>

  <script

    src="http://app.autocad360.com/jsapi/v2/Autodesk.AutoCAD.js">

  </script>

  <script src="js/acadext2.js"></script>

  <script src="js/d3.min.js"></script>

  <script src="js/d3pie.min.js"></script>

  <script src="js/chart.js"></script>

</head>

<body onload="init();">

  <div id="pieChart" class="centered-on-page">

  </div>

</body>

</html>

Here’s the JavaScript, including the Shaping Layer extensions…

function getObjectCountsFromAutoCAD() {

  var jsonResponse =

    exec(

      JSON.stringify({

        functionName: 'GetObjectCountData',

        invokeAsCommand: false,

        functionParams: undefined

      })

    );

  var jsonObj = JSON.parse(jsonResponse);

  if (jsonObj.retCode !== Acad.ErrorStatus.eJsOk) {

    throw Error(jsonObj.retErrorString);

  }

  return jsonObj.result;

}

 

function selectObjectsOfType(jsonArgs) {

  var jsonResponse =

    exec(

      JSON.stringify({

        functionName: 'SelectObjectsOfType',

        invokeAsCommand: false,

        functionParams: jsonArgs

      })

    );

  var jsonObj = JSON.parse(jsonResponse);

  if (jsonObj.retCode !== Acad.ErrorStatus.eJsOk) {

    throw Error(jsonObj.retErrorString);

  }

  return jsonObj.result;

}

var _pie = null;

 

function init() {

  registerCallback("refpie", refreshPie);

  loadPieData();

}

 

function refreshPie(args) {

  loadPieData();

}

 

function loadPieData() {

  var pieOpts = setupPieDefaults();

  try {

    var contents = getObjectCountsFromAutoCAD();

    if (contents) {

      pieOpts.data = contents;

      if (_pie)

        _pie.destroy();

      _pie = new d3pie("pieChart", pieOpts);

    }

  }

  catch (ex) {

    _pie.destroy();

  }

}

 

function clickPieWedge(evt) {

  selectObjectsOfType(

    { "class": evt.data.label, "expanded": evt.expanded }

  );

}

 

function setupPieDefaults() {

  var pieDefaults = {

    "header": {

      "title": {

        "text": "Object Types",

        "fontSize": 24,

        "font": "Calibri"

      },

      "subtitle": {

        "text": "Quantities of objects in modelspace.",

        "color": "#999999",

        "fontSize": 12,

        "font": "Calibri"

      },

      "titleSubtitlePadding": 9

    },

    "data": {

      // nothing initially

    },

    "footer": {

      "color": "#999999",

      "fontSize": 10,

      "font": "Calibri",

      "location": "bottom-left"

    },

    "size": {

      "canvasWidth": 400,

      "pieInnerRadius": "49%",

      "pieOuterRadius": "81%"

    },

    "labels": {

      "outer": {

        "pieDistance": 32

      },

      "inner": {

        //"hideWhenLessThanPercentage": 3,

        "format": "value"

      },

      "mainLabel": {

        "fontSize": 11

      },

      "percentage": {

        "color": "#ffffff",

        "decimalPlaces": 0

      },

      "value": {

        "color": "#adadad",

        "fontSize": 11

      },

      "lines": {

        "enabled": true

      }

    },

    "effects": {

      "pullOutSegmentOnClick": {

        "effect": "linear",

        "speed": 400,

        "size": 8

      }

    },

    "misc": {

      "gradient": {

        "enabled": true,

        "percentage": 100

      }

    },

    "callbacks": {

      onClickSegment: clickPieWedge

    }

  };

 

  return pieDefaults;

}

And here’s the C# code…

using Autodesk.AutoCAD.ApplicationServices;

using Autodesk.AutoCAD.DatabaseServices;

using Autodesk.AutoCAD.EditorInput;

using Autodesk.AutoCAD.Runtime;

using Autodesk.AutoCAD.Windows;

using Newtonsoft.Json.Linq;

using System;

using System.Linq;

using System.Runtime.InteropServices;

using System.Text;

 

namespace JavaScriptSamples

{

  public class ChartCommands

  {

    private PaletteSet _chps = null;

    private static Document _curDoc = null;

    private bool _refresh = false;

 

    [DllImport(

      "AcJsCoreStub.crx", CharSet = CharSet.Auto,

      CallingConvention = CallingConvention.Cdecl,

      EntryPoint = "acjsInvokeAsync")]

    extern static private int acjsInvokeAsync(

      string name, string jsonArgs

    );

 

    [CommandMethod("CHART")]

    public void ChartPalette()

    {

      // We're storing the "launch document" as we're attaching

      // various event handlers to it

 

      _curDoc =

        Application.DocumentManager.MdiActiveDocument;

 

      // Only attach event handlers if the palette isn't already

      // there (in which case it will already have them)

 

      var attachHandlers = (_chps == null);

 

      _chps =

        Utils.ShowPalette(

          _chps,

          new Guid("F76509E7-25E4-4415-8C67-2E92118F3B84"),

          "CHART",

          "D3.js Examples",

          GetHtmlPathChart()

        );

 

      if (attachHandlers && _curDoc != null)

      {

        AddHandlers(_curDoc);

 

        Application.DocumentManager.DocumentActivated +=

          OnDocumentActivated;

 

        _curDoc.BeginDocumentClose +=

          (s, e) =>

          {

            RemoveHandlers(_curDoc);

            _curDoc = null;

          };

 

        // When the PaletteSet gets destroyed we remove

        // our event handlers

 

        _chps.PaletteSetDestroy += OnPaletteSetDestroy;

      }

    }

 

    [JavaScriptCallback("SelectObjectsOfType")]

    public string SelectObjectsOfType(string jsonArgs)

    {

      // Default result is an error

 

      var res = "{\"retCode\":1}";

 

      var doc = Application.DocumentManager.MdiActiveDocument;

      if (doc == null)

        return res;

      var ed = doc.Editor;

 

      //ed.SetImpliedSelection(new ObjectId[]{});

 

      // Extract the DXF name to select from the JSON arguments

 

      var jo = JObject.Parse(jsonArgs);

      var dxfName = jo.Property("class").Value.ToString();

      var expanded = (bool)jo.Property("expanded").Value;

 

      // We'll select all the entities of this class

 

      var tvs =

        new TypedValue[] {

          new TypedValue((int)DxfCode.Start, dxfName)

        };

 

      // If the wedge is already expanded, we want to clear the

      // pickfirst set (so the default value is null)

 

      ObjectId[] ids = null;

      if (!expanded)

      {

        // Perform the selection

 

        var sf = new SelectionFilter(tvs);

        var psr = ed.SelectAll(sf);

        if (psr.Status != PromptStatus.OK)

          return res;

 

        // Get the results in our array

 

        ids = psr.Value.GetObjectIds();

      }

 

      // Set or clear the pickfirst selection

 

      ed.SetImpliedSelection(ids);

 

      // Set the focus on the main window for the update to display

      // (this works fine when floating, less well when docked)

 

      Application.MainWindow.Focus();

 

      // Return success

 

      return "{\"retCode\":0}";

    }

 

    [JavaScriptCallback("GetObjectCountData")]

    public string GetObjectData(string jsonArgs)

    {

      var doc = Application.DocumentManager.MdiActiveDocument;

      if (doc == null)

        return "{\"retCode\":1}";

 

      // Initialize the JSON string to return the count information

 

      var sb =

        new StringBuilder("{\"retCode\":0, \"result\":");

      sb.Append("{\"sortOrder\":\"value-desc\",\"content\":[");

 

      using (

        var tr = doc.TransactionManager.StartOpenCloseTransaction()

      )

      {

        bool first = true;

 

        var ms =

          (BlockTableRecord)tr.GetObject(

            SymbolUtilityServices.GetBlockModelSpaceId(doc.Database),

            OpenMode.ForRead

          );

 

        // Use LINQ to count the objects in the modelspace,

        // grouping the results by type (all done via ObjectIds,

        // no need to open the objects themselves)

 

        var q =

          from ObjectId o in ms

          group o by o.ObjectClass.DxfName into counts

          select new { Count = counts.Count(), Group = counts.Key };

 

        // Serialize the results out to JSON

 

        foreach (var i in q)

        {

          if (!first)

            sb.Append(",");

 

          first = false;

          sb.AppendFormat(

            "{{\"label\":\"{0}\",\"value\":{1}}}", i.Group, i.Count

          );

        }

        tr.Commit();

      }

      sb.Append("]}}");

 

      return sb.ToString();

    }

 

    private void OnDocumentActivated(

      object s, DocumentCollectionEventArgs e

    )

    {

      if (_chps != null && e.Document != _curDoc)

      {

        // We're going to monitor when objects get added and

        // erased. We'll use CommandEnded to refresh the

        // palette at most once per command (might also use

        // DocumentManager.DocumentLockModeWillChange)

 

        // The document is dead...

 

        RemoveHandlers(_curDoc);

 

        // ... long live the document!

 

        _curDoc = e.Document;

        AddHandlers(_curDoc);

 

        if (_curDoc != null)

        {

          // Refresh our palette by setting the flag and running

          // a command (could be any command, we've chosen REGEN)

 

          _refresh = true;

          _curDoc.SendStringToExecute(

            "_.REGEN ", false, false, false

          );

        }

        else

        {

          acjsInvokeAsync("refpie", "{}");

        }

      }

    }

 

    private void AddHandlers(Document doc)

    {

      if (doc != null)

      {

        if (doc.Database != null)

        {

          doc.Database.ObjectAppended += OnObjectAppended;

          doc.Database.ObjectErased += OnObjectErased;

        }

        doc.CommandEnded += OnCommandEnded;

      }

    }

 

    private void RemoveHandlers(Document doc)

    {

      if (doc != null)

      {

        if (doc.Database != null)

        {

          doc.Database.ObjectAppended -= OnObjectAppended;

          doc.Database.ObjectErased -= OnObjectErased;

        }

        doc.CommandEnded -= OnCommandEnded;

      }

    }

 

    private void OnObjectAppended(object s, ObjectEventArgs e)

    {

      _refresh = true;

    }

 

    private void OnObjectErased(object s, ObjectErasedEventArgs e)

    {

      _refresh = true;

    }

 

    private void OnCommandEnded(object s, CommandEventArgs e)

    {

      // Invoke our JavaScript functions to refresh the palette

 

      if (_refresh && _chps != null)

      {

        acjsInvokeAsync("refpie", "{}");

        _refresh = false;

      }

    }

 

    private void OnPaletteSetDestroy(object s, EventArgs e)

    {

      // When our palette is closed, detach the various

      // event handlers

 

      if (_curDoc != null)

      {

        RemoveHandlers(_curDoc);

        _curDoc = null;

      }

    }

 

    private static Uri GetHtmlPathChart()

    {

      return new Uri(Utils.GetHtmlPath() + "chart.html");

    }

  }

}

I’ve actually found this to be quite a useful little sample: not just from the way it shows interactions from HTML5/JavaScript to .NET and back, but also from a user perspective. If you want to quickly select all the objects of a particular type from a drawing – perhaps to change their layer or erase them – then this tool could be very handy. It’s essentially a streamlined, graphical version of the QSELECT command.

November 06, 2014

Adding more speech recognition to our stereoscopic Google Cardboard viewer

One of the pieces of feedback I received from internal folk on the prototype VR app I developed for Google Cardboard and then added voice recognition to was “it’d be really cool to add ViewCube-like navigation commands”.

Which basically meant adding “front”, “back”, “left”, “right”, “top” & “bottom” to the list of voice commands recognised by annyang and have them hooked up to a function that changes the view accordingly. The main complication being the fact that some models come in with “Z up” despite the majority having “Y up”. Hopefully none will come in with “X up”, an eventuality I so far haven’t planned for. :-)

I also fixed a bug which meant the camera’s up direction would flip when you zoomed in or out, causing the orientation to change. The overall experience is pretty stable, at this stage.

Here’s a quick recording of the viewer in action. I managed to record this directly on my phone using adb, which was pretty cool. The only downside was that I had to record the voice separately on my PC and combine the two tracks afterwards in Camtasia: it turns out the browser’s voice recognition competes with any local voice recording, anyway – you hear a stream of beeps indicating them ping-ponging back and forth and no voice commands work – so this ended up being the best approach available.




The video ends a bit abruptly: the recording stopped at exactly 3 minutes, so I ended up truncating everything a bit more than expected. Nothing I said afterwards was of particular importance, in any case.

Here’s a link to the updated HTML page along with the accompanying JavaScript file.

The ADN team is busy demoing this – along with other samples, of course – at their annual Developer Days around the world. I’m really looking forward to catching up with them at the DevDay in Las Vegas and experiencing hundreds of developers giving this a try at once – hopefully with simultaneous voice commands. Should be quite something! :-)
