Hands-on with HoloLens: On the cusp of a revolution (arstechnica.com)
148 points by ingve on April 1, 2016 | 91 comments



I played with Hololens at Build this week. I believe, after seeing it, using it, and even developing a little with it, that this device is truly revolutionary, and Google is wasting its money in Florida.

I shook my head back and forth like a dog trying to dry itself off, and the images I saw barely wiggled. The images are bright enough to completely occlude reality. The software dev stuff is from MS, so it is polished, simple, and powerful.

Voice recognition on the device, for example, is just handled by calling the voice functions in the core library, then GIVING IT A STRING. It listens for this plain text string. That's it. Sooooo simple and powerful.
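
For a sense of what that looks like, here is a toy sketch of the registration pattern. The real HoloLens API is C#/Unity (a keyword recognizer that takes plain phrase strings, if I remember right), so every name below is illustrative, not the actual SDK:

    class KeywordListener:
        # toy stand-in for a platform keyword recognizer
        def __init__(self):
            self.handlers = {}

        def register(self, phrase, callback):
            # you literally hand it the plain-text phrase to listen for
            self.handlers[phrase.lower()] = callback

        def on_speech(self, recognized_text):
            handler = self.handlers.get(recognized_text.lower())
            if handler:
                handler()

    listener = KeywordListener()
    listener.register("show menu", lambda: print("menu opened"))
    listener.on_speech("Show Menu")   # -> menu opened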


What does Hololens do that Magic Leap doesn't? Information is very sparse, but I had the impression that the latter is a superset of the former.


It exists and is available to developers?


This one is available too, with 90° FOV: https://www.metavision.com


First-mover is an advantage, sure, but it hardly means that anyone who isn't first out of the gate is wasting their money.

Anyway, I see downthread that VonGuard has made the silly claim that "Hololens can do everything Magic Leap is supposed to do", so I think it's safe to say that the "wasting money" thing is just a throwaway line that's not based on a lot of actual knowledge in the field. I feel kind of trolled.


All I will say is: try Hololens, then come back here and tell me it is lacking, dramatically.

It's a VERY good device. Surprisingly good. I expected it to suck.


I think what's happened is that you meant "supposed to do" as in "it fills the need that ML was aiming at", and I took it as "it has feature parity". The latter is clearly not true - ML is supposed to have a very wide FOV, handle the accommodation reflex, simulate occlusion, do depth of field, etc. - none of which Hololens can do.


Alexa Skills kit voice recognition configuration is also done by just giving it a String. It's a wonderful time to be working on apps/APIs using voice recognition!


So you've demoed Magic Leap? What differences did you see that would make it a waste of Google's money?


Magic Leap is based on super complex tech that draws images right onto your retina. It's the kind of tech that will take them another couple of years to make small enough to package, and possibly another few years to build the tools to develop for it and to ensure it's not actually blinding people by shining bright lights right into their eyes. A friend of mine used it and said Magic Leap, initially, is like trying to breathe liquid oxygen in The Abyss: you fight it and can't really take that bright-ass light shining in your eye. You have to really get used to it and relax yourself for it to work.

Hololens is just a screen in front of your face. It could never blind you or hurt your eyes. It just works, it works today, it fits in a product form factor today, and after using it for a while, I really don't see much need for an also-ran here. Hololens can do everything Magic Leap is supposed to do, and it can do it a LOT cheaper and safer. Plus, MS dev tools. You just know that Google's Magic Leap dev kit will, basically, be targeted at super intelligent people and require a lot of do-it-yourself stuff.


I haven't used either, but my impression was that Magic Leap's technology is basically projecting a light field into your eyes, so your eyes can actually focus naturally, as opposed to looking at an image with a fixed focal distance like the HoloLens. But I'm not super informed about either product, so I could be wrong.


Thanks for the reply, even though I was hoping it wasn't basically hearsay and speculation (which it unfortunately is). Abovitz has repeatedly said that they absolutely want their tech to be safe enough for kids to use, so perhaps the safety concerns are not based on facts? Magic Leap apparently has active occlusion, and can run the spectrum from VR to AR/MR; pretty sure that makes it the potential leader, not an also-ran. I guess we'll see soon enough. Exciting times.


What I'm saying is, Hololens does ALL of those VR/AR things already, and there's not even a slight chance of eye damage involved. I really have trouble envisioning anything that would be light-years beyond Hololens; rather, I expect something that's a small step up from Hololens, and yet still 2 years out.


Magic Leap apparently can draw things at more than one focal distance from you, unlike the HoloLens's fixed ~2 m. That's a massive difference.


Just wrong about HoloLens. Stuff may be drawn close to your face, but the image is scaled very, very well. It may be drawn up close, but it still looks like it's very far away, way more than 2 meters.


You're just wrong about this. Magic Leap has accommodation support, Hololens doesn't. Any sense of depth you get from Hololens is simply stereoscopy.


Compared to the HoloLens... 2, that is. Yes, there will be a HoloLens 2 by the time Magic Leap's 2020 future arrives...???


Focusing lasers onto your retina is not new. There's been the Virtual Retinal Display for a while.

https://en.wikipedia.org/wiki/Virtual_retinal_display

http://www.hitl.washington.edu/research/vrd/

http://ascent.atos.net/ascent-look/

And here's a (terrifying) DIY version: http://eclecti.cc/hardware/blinded-by-the-light-diy-retinal-...


I remember it being mentioned in Wired in the 90's. But then, Westinghouse built a Mech (yes, a battle suit) in the 50's, and they never caught on. Of course, they never put anyone inside the mech because it could have snapped an arm off by bending the wrong way...

Some tech comes out early but doesn't catch on for a reason.


Sounds like Magic Leap is the future and HoloLens is now. I cannot imagine that we would be satisfied using all these clunky devices for long.


Um...they didn't "breathe liquid oxygen" in The Abyss...that stuff is instant-frozen cold.

See:

https://en.wikipedia.org/wiki/Liquid_breathing

I really can't believe that the light level in MagicLeap isn't adjustable to a comfortable level.


I believe that this has way more chance to revolutionize human/computer interaction than anything Oculus is doing. Outside of specific areas (like gaming and certain types of video-based entertainment), this type of AR/MR is much more useful than Rift-style VR.

You can interact with others, your environment, and other systems without the isolation or nausea of VR. On top of that, being able to supplement the real world instead of only temporarily replacing it seems far more valuable to me.

Now we just gotta shrink it down to contact lens size, and we're good to go!

edited for formatting


I think these are two fundamentally different things: AR, as the name would suggest, augments reality. VR, however, allows for the creation of a completely synthetic reality. Reality can then be anything and you're liberated from reality.

So fundamentally VR has more potential than AR. Things like nausea may take some time to work out but they're not unsolvable. For example, if the issue is the disconnect between the acceleration/orientation sensors in your ears and the virtual reality we can just bypass those sensors and inject a signal directly into the relevant nerves. It might even be possible to do this through the skin in combination with training. The frame rate/resolution issues will also improve until eventually virtual reality will be indistinguishable from reality (ignoring tactile input for a sec). Sure physical movement in this environment is another problem. But the end game is something that is simply incredible/unimaginable.

With AR- sure you can add more synthetic information over reality. You still have tactile challenges (you can't touch it or interact with it). It is very difficult to combine seamlessly with the outside environment to make it more than just a fancy heads-up-display.

Not saying AR isn't cool and full of potential but I don't think we can compare its potential with VR. VR is the holodeck... AR is ... a heads-up display? Not to mention the need to carry AR with you wherever you go if you want to use it in the "real world", power it, compute, etc.

EDIT: One more thought while I'm at it. You can always inject the real reality into VR and thus make it AR but you can't remove reality from AR... Or if you could the two would basically merge. So AR can be seen as VR with "transparency" ... And VR is a superset of all ARs where you can't completely remove reality.


> So fundamentally VR has more potential than AR.

Fundamentally, VR just has a different potential than AR.

> Not to mention the need to carry AR with you wherever you go if you want to use it in the "real world", power it, compute, etc.

You need to carry the VR headsets and their host-devices with you too.

I believe we should clearly delineate the definitions of VR vs AR first.

Virtual Reality should be things like The Matrix. A reality that an individual experiences without modifying the real world for anybody else. Current technology simply shoves a screen right up against our eyes but the goal is obviously to plug that content straight into our very brains.

Augmented Reality should be things like an audio speaker or a video display; devices that "inject" manmade media into the real world; devices which generate reproducible experiences that everybody, even animals, can see, hear, touch and smell.

In AR a single device may provide content passively to multiple viewers, like your music system or television or a hologram generator. Even your pets can see and hear whatever's playing, and they certainly don't need to purchase or wear anything.

VR aims to interface with your neurons and you're guaranteed to experience the full content. AR projects content into the real world which may or may not bounce on to our senses.

TL;DR: VR replaces reality. AR enhances reality. But that's just my opinion. :)


Thank you for your explanation. It seems like for any renaissance-man programmer, either of these areas is ripe for individuality and invention in terms of new experiences in the form of apps for these VR and AR devices. Thoughts?

And to maybe extend the discussion a bit, what about devices that plug right into the centers of consciousness? Is that just more invasive, arguably better AR, or is it VR? What about better hearing aids that sample at 10 GHz and connect directly to the nerves instead of hair cells -- what are they considered? What about when I pipe my uncompressed FLAC music through the hearing implant -- VR or AR? It seems like the future is a combination of the two in far more complex ratios than just one or the other or split screen. VR probably has limits of perceptiveness if it fails to model a logical, non-chaotic reality. It therefore can eventually become an abstraction of an existing event or, well, present stimulus. Anyhow, I think the ideas are all really profundo. Looking forward to replies.

[1] https://teddybrain.wordpress.com/2013/08/28/a-brief-review-o...


My reply to daveguy might apply to some of your questions, not sure if you saw it: https://news.ycombinator.com/item?id=11412809


I like this distinction. By this definition something like Amazon Echo would be augmented reality, but the HoloLens would be virtual reality -- unless everyone is wearing a HoloLens? Since you mentioned television as an example, I guess I understand what you are trying to say, although I do think that definition adjustment would be a tough sell (TV as AR). That definition certainly works for the most extreme cases of VR and AR -- where VR is jacked into your spinal cord and AR is a true hologram with 3D tactile feedback -- but both of those are still science fiction. If you had the Star Trek holodeck, would that be VR or AR?


I think the differentiator between AR and VR should be this question: Can other humans and non-human lifeforms perceive a change or difference in reality as a result of that device?

TV can be considered a primitive form of reality augmentation. As I said even your cat can see and hear what's on. Even aliens who see in an entirely different spectrum (or don't have any vision at all) will be able to pick up the colors and sounds generated by that TV, through their monitoring instruments (just as we can "see" infrared and detect magnetism etc. through our tech.) A television set modifies reality in the space occupied by that set, for everyone, however minor that modification may be. By that definition I suppose even paintings can be considered AR.

Again this is just my opinion. I don't know how the Star Trek holodeck exactly works (in-universe) but it would be AR, as I'm assuming that anyone outside the holodeck will be able to see the generated visuals if they were to peek in through a hole, as it were.


I can't help but alarm you that even consciousness itself is a vulnerable state of existence in possible deception. That's why Alan Watts wrote about the Wisdom of Insecurity. Dream theory exists. Etc. I don't know how well these notions you mention about knowing virtual from pure are founded when even our own standards can be undermined by being deceptive. Can't really put a definition on a Jenga board and give it any credence. It sounds sickening, but I'd say that we must use the definition of working memory in our distinction. Which unfortunately means that Alzheimer's patients and those with faulty memory are indeed compromised with regard to their innate sense of a cohesive and sensible reality. They truly are living in AR..


If AR is VR with transparency, wouldn't that mean that AR is a superset of VR, in the same way that RGBA images are a superset of RGB images?

Or looking at it another way, isn't VR just AR with the lights off?
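
To put a number on the "transparency" framing: per pixel it's just ordinary alpha blending, something like this sketch of the analogy (not anything either headset actually exposes):

    def composite(virtual, real, alpha):
        # alpha = 1.0 -> pure VR (reality fully replaced)
        # alpha = 0.0 -> reality untouched
        # in between  -> AR-style overlay
        return alpha * virtual + (1.0 - alpha) * real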


Isn't AR just VR with a re-projection of real-time video stream from front mounted cameras?


No, my wetware vision apparatus captures more information than any camera in existence, thus it would be a lossy filter.


Isn't VR just AR with a cover over the glass?

See, that definition is much much technically simpler.


I think you're being funny, but the Vive's AR is literally this.


You can't remove reality just by replacing the visual and auditory inputs. Your body still has to exist in the space it's in. AR acknowledges that that reality is still there; VR denies it.


I think YZF's edit is a very good point. VR does not have to deny the reality that is still there. You could include an image of reality within the VR environment. If you stick on two video cameras and feed them in real time to an Oculus Rift, then you would essentially have AR in a VR headset (assuming the video feed is not perceptibly delayed: 60-120 Hz refresh and less than ~10 ms frame-to-frame latency). Then again, with AR apparently you can make the image bright enough to occlude reality (however, you couldn't have a dark room as the augmented reality). With the inclusion of reality in VR I think you could essentially emulate the HoloLens, and what you could or couldn't do between the two would be blurred.


Bright enough? Just add a plastic cover.


So fundamentally VR has more potential than AR

I couldn't agree more completely. I live in a physical world, and my god I wish I could connect it better to the information world.

I don't care at all about VR. I'll try it because it's new, but I have close to zero interest in gaming, and beyond that there are very few use cases where it is interesting. Design maybe, but a lot of these use cases actually work better as AR anyway.

You argue that we can bypass the orientation senses by injecting signal into nerves, but that the tactile challenge in VR is impossible?

Yes, you have to wear it. But glasses aren't a big deal - and will keep getting better - and AFAIK you have to wear VR gear too.


> I couldn't agree more completely.

Mmmh... you say you agree with the previous poster but your message seems to disagree completely.


Ah yes! Mistyping...


AR plus opacity to remove the real world... is to add a plastic cover. You can't add the real world to VR without a camera and latency.

So AR is the superset of VR, where a physical cover makes them identical when it is on.


I don't see why VR should be isolating. Just because you're not immediately available to the people immediately around you doesn't mean you aren't available in a very personal way to many, many more people. VR is going to enable "face to face" communication across continents. It already has; it's just a matter of penetration now.

And VR doesn't cause nausea. Bad applications cause nausea.


You're only getting half the equation. Remember that the other half - the half that's not about the human - is the computer /understanding the space you're in/. This is software that understands there's a wall thing there, for whatever understanding of "wall" it has.


VR is for the living room. AR is for everything else.


I think there are rooms for both VR and Augmented Reality to co-exist.


Absolutely, AR can exist in all the real-life rooms, and VR can still make plenty of rooms of its own.


What I got from the video:

The graphics integration into the real world is phenomenal. If you put a virtual object in a room you can walk around it. If you put something on a wall it stays there no matter where you walk.

Unfortunately, the interface -- how you interact with virtual objects -- is completely janky. Awkward pinching gestures to select keys from a floating virtual keyboard. Cursors that are supposed to represent your physical movement, but instead jump around.

It is tantalizingly close, but it seems like it would be just as annoying as it is neat or useful until someone comes up with a better (simple and reliable) interface.


Also, the fox runs through objects that are even slightly complex, as it can't comprehend their depth (e.g. the camerawoman).

Another interesting depth issue: it ran to the back of the table and straight onto the floor as if they were continuous, because the viewing angle made the table and the floor appear right next to each other, separated only by a line, so the fox didn't do a jump animation. I suspect a depth map of the room needs to be constructed to solve that issue, which is no mean feat.
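
Roughly, the fix would be to query the reconstructed surface height under the character's next step and trigger a jump or fall when there's a discontinuity. A toy sketch (all names are mine, nothing from the actual SDK):

    def next_move(surface_height, here, there, step_limit=0.1):
        # surface_height(x, z) -> reconstructed height of the room mesh in metres
        drop = surface_height(*here) - surface_height(*there)
        if abs(drop) <= step_limit:
            return "walk"                                   # flat floor, or flat table top
        return "jump_down" if drop > 0 else "jump_up"       # table edge to floor, etc.

    # table top at 0.75 m, floor at 0.0 m -> a jump, not a walk
    print(next_move(lambda x, z: 0.75 if x < 1.0 else 0.0, (0.5, 0.0), (1.5, 0.0)))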

These kinds of issues just show me this tech is still years away from being what we'd expect.


Good points. I was most impressed with the wall identification/interaction. If you set up a big empty room, then everything could be VR, except with a concept of where the ends are. So it would be a walkable immersive environment. But I agree, it is not to the point where the average consumer would be happy. Or they would probably be happier with a much cheaper, standard low-quirk VR headset.


If you set up a big empty room, then everything could be VR, except with a concept of where the ends are.

That, plus the collaboration features would make HoloLens pretty awesome for any kind of collaborative product design. Take cars, for example. Right now, car mockups are done in clay, and while clay is easier to mold than metal, it still takes time and limits your ability to iterate. But with Hololens, you can have a design review where each of the participants puts on an AR unit, and now you can all see the same car and make changes to it in real time.

I'm excited to see what sorts of applications will be opened up by being able to display virtual objects in a "real" setting. I also agree that this won't be consumer-facing technology (at first). This is the sort of thing that'll take off with architects and engineers before it takes off with "normal people".


Conker can handle complex objects, but they have to have been present when scanning the scene; my cameraperson was moving around so she didn't get scanned.

The depth maps can get dynamically updated, so it's not insurmountable, but currently the process seems to take a few seconds. I'm not sure if this is a design decision, a processing limitation, a hardware data capture limitation, or something else entirely.


Hmmm... Kinect creates a depth map. Not sure why they didn't use the same technology.


It does; you can see it in the first video (it appears to be quite a bit more than a depth map, actually extracting geometry from it, which fits with what the SDK news has said).

Pure speculation, but I would bet that it generates the room geometry at startup and then doesn't change it later. The device has limited processing power and needs to hit a high frame rate, so doing that makes sense.

A person walking into the scene wouldn't be accounted for in that case. It's also dependent on scans of the environment and the quality of geometry reconstruction, so things like a gap between objects that you didn't see from a good angle could easily run into that issue.


This is incorrect -- it does constantly rescan the room and will detect someone moving through the environment. It doesn't do that super quickly, though, and it can be difficult to capture the geometry of a moving person accurately.


I think there are two fundamental reasons it's "janky":

1. It's version 1.

2. It's an interface that nobody's used before.

Neither of these is a mark against the technology.

Go ask a caveman to use a version 1 iPhone, and see if he falls in love with it or just trash-talks it on an internet forum.


But the iPhone did not require you to "step inside" of it. There's a whole world of difference in the threshold for "good enough" between a device you hold in your hand and one that you put on your head, overlaying your whole view. With a smartphone, you can still go all xkcd:303 during periods of waiting. Head mounted? Not so sure that works out well.

That being said, maybe the hololens can be good enough nonetheless, because interaction latency is separate from pose/presentation latency and the latter is certainly far more make-or-break.

I suspect that the main achievement (or magic sauce) of the HoloLens is a strict separation between position-finding plus perspective-correct content rendering (which together are quite high-latency, relatively speaking) on one hand, and the final push to the actual framebuffer on the other, where a last corrective shift (and maybe rotate, and possibly even zoom) can be applied independently. The latter would be very low latency, based on the difference between the camera frame at the beginning of the rendering process and the most recent camera frame. This would be closely related to (if not derived from) software-based shake reduction in cameras, which in turn might be traced back to the algorithms used in optical mice (which lack rotation/zoom).
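
A very rough sketch of that two-stage split (purely my speculation about how it might work; the pixels-per-degree figure is a made-up constant):

    import numpy as np

    PIXELS_PER_DEGREE = 20.0   # assumed display density, purely illustrative

    def late_correct(framebuffer, yaw_at_render, pitch_at_render, yaw_now, pitch_now):
        # fast path, just before scanout: shift the already-rendered image
        # (an H x W x 3 array) by the head rotation that happened while the
        # slow, perspective-correct render was in flight
        dx = int(round((yaw_now - yaw_at_render) * PIXELS_PER_DEGREE))
        dy = int(round((pitch_now - pitch_at_render) * PIXELS_PER_DEGREE))
        # np.roll is the cheapest possible stand-in; a real system would
        # resample properly, handle the exposed edges, maybe rotate/zoom too
        return np.roll(np.roll(framebuffer, -dx, axis=1), dy, axis=0)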


I don't think there's a different level of 'good enough' required here - it's just that we're so early in the development of AR UI that we don't yet know what good enough will look like. The iPhone was the first touch interface that was low enough latency to feel like you were touching and sliding the images around behind the screen. Making those scroll and pinch-to-zoom interactions feel right was critical to its success. Certainly right now these Hololens interactions are not as slick and gratifying as flick-scrolling was in the original mobile Safari. And Apple and other manufacturers have now spent years optimizing screens to make the image seem closer to the surface of the glass and playing with varying success with haptics and pressure sensing and so on.


But people had phones before the iPhone, and they were sluggish as well. Jumping from keypad to touchscreen was a jump large enough to be botchable (as a few attempts before the iPhone had clearly shown), but tiny nonetheless compared to the jump to a large HMD (from what exactly, btw?). There is no way to casually use the HoloLens like you might occasionally use a laptop sitting on the side of a paper-centric desk without going fully electronic; it's pretty much an all-or-nothing commitment. Nobody would tie a contraption like that to their head to use it for just a fraction of the tasks at hand.


I agree that the interface is much more important with this, where you are immersed, than with the iPhone, where you can step away easily. The adjustments people made to the iPhone were getting used to a non-tactile keyboard and finding other controls on the screen -- but those controls worked well. Shake reduction could probably help significantly. Definitely some sort of stabilization in the representation between you and your cursor.

I'm not sure if position finding for the head is very separate from rendering, just because it is so smooth when he moves. I think head positioning is very separate from gesture positioning -- which is why cursors are jumpy. Maybe 13of40 is right on getting used to the interface -- maybe the user is holding his hand out too straight, not straight enough, or something to that effect, but that still seems more of an interface problem than a user problem. I do wish it was priced closer to the Oculus. At Oculus pricing I would probably get one, but at 4x it seems not quite there (interface and lacking apps).

13of40 - I think it is more "version 1" issues with the shake and a lack of the best human-to-computer interface rather than an "interface that nobody's used".

Put-that-there[0] is an interface from the early 1980's that seems like it would do well to reduce the janky interactions. Speech recognition probably isn't quite where it needs to be for a generic setup (as opposed to the structured setup of put-that-there).

Like I said, the computer-to-human side -- rendering virtual objects over a real field -- is beautiful. The quirks with incomplete map information aren't too distracting because we are good at filling in the gaps. The human-to-computer side needs work because it takes 30s to do a simple resizing operation. If you could say "make that screen 90 percent of my field of view" at any time, the video author wouldn't be complaining that getting too close causes clipping of his field of view.

[0] http://m.youtube.com/watch?v=0Pr2KIPQOKE
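
A put-that-there-flavoured sketch for the resize case: the object comes from wherever the head ray currently points, the verb and amount come from the recognized speech string (all of this is illustrative, not a real API):

    def handle_command(text, gaze_target, fov_deg=30.0):
        # gaze_target: whatever virtual object the head ray currently hits
        words = text.lower().replace("%", " percent").split()
        if gaze_target is None or "percent" not in words:
            return
        pct = float(words[words.index("percent") - 1])
        gaze_target["angular_width_deg"] = fov_deg * pct / 100.0

    screen = {"angular_width_deg": 10.0}
    handle_command("make that screen 90 percent of my field of view", screen)
    # screen["angular_width_deg"] is now 27.0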


> ...adjusts the display output at 240 frames per second...

Hot Dang! I guess that ought to keep a VR object stable.


...Dayum. And I say that working in AR. That is /solid/ tracking - although the "I'm a competitor" in me feels compelled to point out that this was done in a room specifically set up for it. Your mileage /will/ vary in your own spaces, but still... damn, that's good tracking.


Unfortunately it's not solid tracking.

It's the standard structured IR light over the scene, which means that sunlight, reflected sunlight, fluorescent lights, or any other IR emitter will wreak havoc on the SLAM algo they have running.

What they need is a visual SLAM and not just the IR solution. It works well in their dark, candle-lit studio where they did the demos. But being outside, near windows, near reflections from outside, or near flood sources all kill or badly affect it.


Damn/awesome. Someone I know got to see their demo at a conference a few months ago, and that someone figured they had IR targets under visible-light-opaque elements.

Do you know if the structured light is made by the unit, or is it an unmentioned peripheral?


HoloLens is a derivative of Kinect. Kinect 1 (with the Xbox 360) was structured light. Kinect 2 (with the Xbox One) is time-of-flight. I'm not sure which tech the HoloLens uses (I don't think Microsoft has specified one way or another, and I didn't have an IR camera to check whether it was emitting a pattern), but I'm assuming it's based on the newer tech, because my understanding is that time-of-flight systems can much better handle having multiple devices scanning the same space. HoloLens handled this situation very well; I've seen at least 8 devices all scanning the same area without getting confused.

It's all driven by the unit, btw; no need for markers or external illumination.
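
For what it's worth, the practical difference: structured light infers depth from how a projected IR pattern deforms over the scene, while time-of-flight works from how long the emitted IR takes to come back (real sensors like the Kinect 2 measure the phase shift of modulated light rather than timing a single pulse, but the distance relation is the same):

    C = 299792458.0   # speed of light, m/s

    def tof_depth(round_trip_seconds):
        # the light travels out and back, so halve the path length
        return C * round_trip_seconds / 2.0

    print(tof_depth(13.3e-9))   # a ~13 ns round trip is roughly 2 m away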


Don't quote me on this, but I believe I've read that it is structured light. From my experience time-of-flight cameras are excellent at things like body detection and terrible at things like SLAM. (I have no idea why and could be wrong... it's been a while since I've looked at the literature)


It seems like the environment scanning could be a better alternative to the array of external sensors the Vive uses to demarcate its play area. Certainly would be more elegant.


I'd love to see this coupled with a cheap thermal-IR camera, or microphone array (to see the spatial localization of sound around you; e.g. where is that darn leak coming from?!), or really with any sensor that collects spatial information outside the range of natural human perception. Some of these exist already in other forms (e.g. 'night-vision' goggles), but the power of this + that would be incredible, especially when you start combining multiple sensory 'images' into one seamless visualization.


Watching this video made me think, there's no reason why a VR setup couldn't similarly map the surroundings and incorporate that into the virtual world. One limitation of VR is that you can't really move around in the virtual environment, because you will bump into things in the real world (barring an omni-directional treadmill or similar). But if the system mapped out your real-world surroundings and either simulated or even just walled off any real-world objects and boundaries, you would avoid running into things, while having much more freedom to explore the virtual environment.

I'm not saying this would remove the use cases of AR, just that it could be a significant enhancement or enabler to many VR experiences.
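
A crude sketch of that boundary idea; the Vive's chaperone does something similar against a user-drawn play area, whereas here it would be against the scanned room (names and threshold are mine):

    import math

    def too_close(user_pos, scanned_points, warn_dist=0.5):
        # scanned_points: (x, y, z) samples of real-world surfaces from the room scan
        ux, _, uz = user_pos
        for (x, _, z) in scanned_points:
            if math.hypot(x - ux, z - uz) < warn_dist:   # ignore height
                return True                              # fade in a virtual wall here
        return False

    print(too_close((0.0, 1.7, 0.0), [(0.3, 1.0, 0.2), (3.0, 1.0, 3.0)]))   # True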


The motion tracking is almost perfect.

The picture stuck to the window almost perfectly, which as someone in VFX I find very impressive.

Tracking head orientation and location is possibly the hardest thing to get right (next to object recognition).


This baby is very hungry. Room decoration - Nom. Gaming Nom. Fashion Industry. Nom. Gadget and Entertainment Industry (TV, Posters). Nom. Advertising Industry - and Self Expression. Nom.

The next ten years will be spent feeding everything to it. He who wraps the last layer around the world, to sell it to the users, makes everything it wraps into his organelles over time. Well played, Microsoft, well played.

If we could decentralize the way consensus augmented reality is shared, we could even do something good for the world. Open-source WiFi hubs offering free processing power and untraceable sharing. Mmmh..


I don't see it. Could you explain some of those industries? And the layers and organelles analogy? Also (and especially) the wifi point, I didn't get how that related at all.


Not sure why onetimePete is being downvoted. Although worded unusually, I concur with his implication - that after some time, advancements in this kind of technology will likely move us towards a ubiquitously augmented world. Although I personally expect we would settle on an open protocol for sharing and receiving public / private augmentations, it makes sense that the organisations who control augmentation technology wield an awful lot of power. Principally, the ability to make the world look, to you, however they want it to... perhaps subtly.

[edit: grammar]


I think P is assuming that these kinds of gadgets will rely for a while on processing done somewhere else close by in order to shrink them further, sort of the Apple Watch crutch. This assumption misses the problem you have with latency - the refresh rate of VR/AR needs to be extremely tight for these objects to really stay in one place when you walk around at even a modest pace. So the limiting factor isn't just processing power and weight, it's power+weight+latency+price that you need to balance, with hard limits on latency and weight.


No clue about his wifi argument, but AR could become very useful for wifi heatmapping, letting you see where reception is particularly good or bad and place the access points accordingly.


It needs similarly untethered hand controllers like the Vive's, but with their own HPUs and depth-camera tracking. Then you won't need this awkward pinching system, and you can use buttons, which I've become very accustomed to in my time using computers.


This is the type of technology that is crying out for a killer app, but may end up struggling to find one.

I had a Vodafone R&D team working with the Sony 3D AR glasses (aka the 'Tom Hanks CES glasses'). We loved the tech and had a great playground, but couldn't get to the creation of one app that solved a clear pain point.

We ended up with glorious tech demos, but they were all suited to a short-lived theme-park attraction, not ongoing use.

Anyway, I am excitedly waiting for someone who can do better than us.


It needs the equivalent of the PC desktop environment. An abstraction that makes sense for how to interact with it, and a way for it to interact with your surroundings.

Maybe IoT plus some connectivity / control standard similar to DLNA would make for a good base, where for example you could look at devices and bring up their menus to control them.
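
A sketch of how simple the "look at a device, get its menu" half could be, assuming each device has a registered position in the room's coordinate frame (all of this is illustrative):

    import math

    def gazed_device(head_pos, head_dir, devices, max_angle_deg=10.0):
        # devices: list of {"name": ..., "pos": (x, y, z)}; head_dir is a unit vector
        best, best_angle = None, max_angle_deg
        for dev in devices:
            v = [p - h for p, h in zip(dev["pos"], head_pos)]
            dist = math.sqrt(sum(c * c for c in v)) or 1e-9
            cos = sum((c / dist) * d for c, d in zip(v, head_dir))
            angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
            if angle < best_angle:
                best, best_angle = dev, angle
        return best   # caller pops up this device's control menu

    tv = {"name": "tv", "pos": (0.0, 1.0, 3.0)}
    print(gazed_device((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), [tv]))   # -> the tv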


Once again, nothing at all to do with holography. It's amazing the shameless way tech companies who know better allow their marketers to misappropriate words to promote their parlor tricks. Microsoft is simply using their version of the Oculus with some overcomplicated video rig for AR to do something far from new. Yet another Rube Goldberg machine destined to be a punch line.

Real holography doesn't require glasses to view, and is not subject to the limitations and discomfort of rote stereoscopic effects. This has been around in one form or another since the early 90's; it never gained wide appeal because of the fact you are secluded from your environment by the headset, and because using it (due to the inherent nature of stereoscopic effect with fake, nauseating parallax) can only be done for short time periods.

History repeats itself, and life imitates art.

Edit: For more on why I think this is a huge fad, just read this article about the death of 3D TV (which is fundamentally based on the same stereoscopic concepts):

https://www.avforums.com/article/in-memoriam-the-death-of-3d...

Choice quote:

"However the single biggest obstacle that 3D faced wasn't different versions, incompatible glasses, exclusivity, lack of content or screen sizes, it was simply that people didn't like wearing the glasses at home. Consumers were happy to wear 3D glasses at the cinema, which are predominantly the cheap and light passive variety but they were less keen to do so in their lounge."

Consumers didn't like wearing glasses then, and they won't like wearing them now.


Well, Google Chrome isn't made from actual chrome either ;)

And while certainly VR goggles may eventually share the same fate as 3D TV, I don't think the comparison is valid (other than they both involve stereoscopy).

It's a very different thing when sub-millimetre head tracking is involved, and when the virtual interocular distance is fixed to what a person's actual distance is. When everything appears to be the correct size and fixed in space, it crosses a significant perceptual threshold. I don't know if you've ever put on modern VR goggles, but there's no way I'd put it in the same category as 3D TV; in fact I don't think I'd put it in the same category as anything else I've experienced either.

And so I don't think you can take the lessons learned from people's willingness to wear glasses for a low-incremental-value 3D TV experience when the VR goggles have a markedly different offering - it's certainly plausible to me (though far from certain) that VR's value is high enough that people will be willing to put up with glasses.


Technically it's not holography, but using the term is the simplest way to convey the idea to the average person. What would you suggest calling it instead?


AR is much less likely to cause nausea, because you still see the real world which your brain will "anchor" to when trying to balance. If the tracking is fast and accurate enough, it won't be a problem.

Your only strong argument is that people don't like to wear glasses, but that could change if it is considered valuable enough. Have you seen the demos of what they can do?


What is the apparent resolution of the browser window shown in the AR world?

Is it comparable to conventional screens?

Just wondering whether I can replace my monitor by this thing :)


You almost certainly can't, for 2 reasons:

A) almost no software right now is designed properly for VR/AR. The best we can do right now is splat a WIMP interface inside a cylinder centered on you. We haven't figured out or translated much of anything to a VR native interface yet.

B) the resolution-cross-FOV is just too low for standard WIMP applications on all current headsets (rough numbers sketched below). I think a VR-native interface wouldn't have as many problems at the current low resolutions, because the depth gives you more to work with for reconstructing images in your brain as you move around the objects. But again, those are largely still missing.
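
Rough numbers for the pixel-density point (all display figures here are my ballpark assumptions, not official specs):

    def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
        return horizontal_pixels / horizontal_fov_deg

    print(pixels_per_degree(1080, 100))   # ~11 ppd, Vive/Rift-class panel and FOV
    print(pixels_per_degree(1920, 50))    # ~38 ppd, a 24" 1080p monitor at arm's length
    # roughly a 3-4x gap per axis, which is why small text in a virtual desktop looks mushy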

We'll get there eventually, and I'm banking on sooner rather than later. But as of right now, you won't be using any HMD as a desktop multimonitor replacement for anything other than driving or flight sims.


From the video it appears that a WIMP interface tethered to open surfaces, like a wall, is up & running.


It would be amazing if they could integrate this with something like Ultrahaptics, so you could feel the virtual objects too.

[1] http://ultrahaptics.com/


Can you wear these while wearing glasses?


The Ars reviewer in this video is wearing glasses


I wear pretty strong corrective lenses (about -8 in each eye) and HoloLens was fine; a lot more comfortable than the Vive with glasses on.


Huh, that felt more like native advertising than something I would expect honest journalism to be like.


Ars Technica doesn't do native advertising. All Condé Nast properties clearly call out native ads, so you'd know if they did. What feels dishonest about it anyway?


It is not native advertising.



