In fact, the headset didn’t even make an appearance at Microsoft’s press conference at E3 2016.
It’s not exactly surprising: outside of a couple of tech demos, such as Microsoft’s Minecraft demonstration from the previous year, the focus of the HoloLens has so far been on enterprise use rather than gaming.
But although a consumer release is still a long way off, developers have been playing with the HoloLens hardware for quite some time. We spoke to one of them, Kazendi, to find out what the hardware is like to develop for, and what kinds of experiences we might finally get to use with it.
Describing Maximilian Doelle, Kazendi’s managing director, as enthusiastic about HoloLens would be an understatement.
“I personally think hands down it will change the way we work, it will change the way we interact with digital environments. I think it’s the most revolutionary device ever released, and I’ve tried every single emerging technology in the last three years” – and yes, that includes the Vive, I checked.
I asked Doelle for an example of what the HoloLens is capable of, and he pointed to the version of Skype Microsoft showed off earlier this year, which allows one user to place objects into the augmented world of another who is wearing the headset.
He also outlined another interesting use case currently being developed for Archibald Optics: using the headset, customers could try out items virtually, placing them in their local environment to see how they’d look.
This use case highlights one of the major weaknesses I encountered in my time with the device: interaction.
I can certainly see the advantage of a controller-free design, but the device’s gesture recognition felt slow and inaccurate, as if it were designed for large, sweeping gestures rather than smaller, more granular control.
It felt like trading a mouse for a touchscreen.
But according to Doelle, the delays I experienced with the gesture recognition are actually recommended by Microsoft, to avoid gestures being triggered accidentally. You have to really mean a gesture for it to register.
So what’s the device actually like to develop for? Surprisingly, for something that isn’t yet widely available to consumers, the SDK is already quite mature.
In particular, Doelle is positive about Microsoft’s decision to make the SDK freely available. “What is great is that the SDK is openly available to anyone, regardless of whether you own a HoloLens or not… you just need a Microsoft account, and there’s also a simulator available which simulates HoloLens on your desktop.”
Interestingly, developing without the hardware poses a particular problem for the HoloLens, because of how it feeds your real-world environment into the experience. The emulator gets around this by offering pre-scanned rooms in which to place virtual objects, but naturally you’ll eventually need to buy the hardware for yourself if you want to scan your own rooms.
Then, when you’re developing experiences, it’s largely a matter of building scenes in Unity and writing C# scripts in Visual Studio, with the finished Unity project exported as a Visual Studio solution for deployment to the device or emulator.
So have Doelle and his team had to write any of their own code to overcome deficiencies in the HoloLens SDK?
“It’s kind of all there; we haven’t had to write our own stuff yet,” Doelle says. “What we want to do is write our own gesture controls. So, as you’ve experienced yourself, you’ve got the air-tap and bloom, and then the air-tap and hold to move things. We wanted to experiment with, for instance, the hands unfolding to open something up, and we’re diving really deep into the SDK now to see how we can do that.”
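For a sense of what Doelle is describing, the built-in gestures he mentions are exposed through Unity’s HoloLens support (circa 2016) via a GestureRecognizer class in the UnityEngine.VR.WSA.Input namespace. The sketch below shows roughly what wiring up air-tap and hold looks like; the class name and log messages are my own illustration, not Kazendi’s code, and API details may differ between Unity versions.

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input; // later Unity versions moved this to UnityEngine.XR.WSA.Input

// Illustrative sketch: listen for the stock air-tap and hold gestures.
public class TapAndHoldHandler : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();

        // Only subscribe to the gestures we actually want to handle.
        recognizer.SetRecognizableGestures(GestureSettings.Tap | GestureSettings.Hold);

        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            Debug.Log("Air-tap recognized");
        };
        recognizer.HoldStartedEvent += (source, headRay) =>
        {
            Debug.Log("Hold started – e.g. begin dragging a hologram");
        };
        recognizer.HoldCompletedEvent += (source, headRay) =>
        {
            Debug.Log("Hold completed – e.g. drop the hologram");
        };

        recognizer.StartCapturingGestures();
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
    }
}
```

Custom gestures of the kind Doelle describes, such as a hands-unfolding motion, aren’t covered by these stock events, which is presumably why his team is digging into the lower-level SDK.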