I was expecting this to be another phone holder; it's much more interesting that you hacked together some hardware. Does this feature a low persistence display? Also, you mention "cheap tracking for next big update" - is this just going to be an improvement over your current tracking, or full 6 DOF tracking? I don't think I've seen any hobbyist 6 DOF tracking for VR yet.
Oh, one note: in your README, the part about Jonas convincing Chinese factories to sell you parts at premium prices should be changed. You probably meant he got really good prices, but premium pricing means basically the opposite.
> I don't think I've seen any hobbyist 6 DOF tracking for VR yet.
Webcam based hobbyist 6dof headtracking for use with desktop screens has been around for many years (freetrack, ftnoir/opentrack and so on), but the quality is rather dreadful. Still, people who don't mind glacial latency from very heavy smoothing have been very happy with those solutions.
But VR requires so much headmounted technology that tradeoffs between cost/weight and quality shift a lot. For desktop tracking, adding head mounted sensors to the existing single camera 6dof tracking solutions would at least double the amount of hardware involved. But when your baseline is a full VR headset, those sensors are an almost negligible extension. Gyro sensors and/or an "inside out" camera could easily add a lot of precision/speed (effectively the same metric, with filtering) to the rotary axes of single cam 6dof. Last time I looked at opentrack it already supported some sort of fusion between stationary camera and Android gyros. This would be a good starting point for a hobbyist VR rig (not room scale).
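To make the fusion idea concrete, here's a minimal one-axis sketch in the complementary-filter style such tools use (the names and the 0.98 blend factor are illustrative, not taken from opentrack):

```typescript
// One-axis complementary filter: integrate the fast-but-drifting gyro,
// pull toward the slow-but-absolute camera yaw. Illustrative only.
interface YawState { yaw: number } // radians

function fuseYaw(
  state: YawState,
  gyroRate: number,  // rad/s from a head-mounted gyro (fast, drifts)
  cameraYaw: number, // absolute yaw from the stationary webcam (slow, noisy)
  dt: number,        // seconds since the last update
  alpha = 0.98,      // how much to trust the gyro integration
): YawState {
  const predicted = state.yaw + gyroRate * dt;                 // dead reckoning
  return { yaw: alpha * predicted + (1 - alpha) * cameraYaw }; // drift pull
}
```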
“I know this request is a little bit out of the ordinary, but would it be possible for you to charge us premium prices for this item? We’d like to pay more. No? Please, we’ll only buy it if you charge us more. Ok, thanks!”
About young children using VR: there was a warning of potential risks two or three years ago:
https://uploadvr.com/study-vr-children/
It's not a big study, nor does it come to any concrete results, so I was wondering if you knew of more data on the subject, or of any real-world feedback on the matter.
From what I've read (which isn't hard to find, but I'm on mobile right now), the danger of VR for kids is not an issue, and it can in fact bring a biological issue to attention more quickly (like revealing sooner whether glasses would benefit them).
Motion-to-photon latency isn't mentioned. It's basically the most important characteristic of a good VR headset. That's why all smartphone-based VR solutions suck and make you sick.
> Motion-to-photon latency [...] most important [...] sick.
Yes and no. And the "no" seems underappreciated.
I normally run my Vive and Lenovo WMR at 30 fps on an old laptop with Intel integrated graphics. So why hasn't it made people sick? Camera passthrough AR helps. Likely the "comfort mode"-like tunnel-vision effect of not doing barrel or chromatic aberration correction. Perhaps not doing predictive tracking, so lag but no judder. Maybe "visible out the corner of your eye" framing. Maybe something else.
Most VR reporting starts from an assumption of games. Games, games, always games. So "you are there" immersion, with no avoidable visible artifacts, no AR, etc. So 90 fps, constant latency, high GPU and HMD bandwidth demands. But if you don't care about games, if you just want a desktop replacement/alternative... the design constraint space looks very different.
I'm looking for something that could replace big desktop monitors and annoyingly unportable laptops when I'm on the go. A HMD (or VR headset) could fit the bill. But I want to use it as a generic display device (that e.g. presents as a large canvas floating in space) rather than a VR-specific gaming or movie gimmick. Unfortunately nearly all the discussion I find online is about gaming. So I have no idea what to expect if I were to buy a headset and plug it into my laptop running linux with intel graphics.
It sounds like you might know something about the desktop experience. Do you have any thoughts, or useful links?
> looking for something that could replace big desktop monitors and annoyingly unportable laptops
Tl;dr: wait for Xmas.
A current design point: gaming laptop; Windows; SteamVR; a virtual desktop like http://bigscreenvr.com/ ; HDMI dummies as mentioned by @sgtmas2006, so Windows thinks it has more monitors; RDP/VNC/etc to a linux VM. All off the shelf.
But gaming laptops are likely "annoyingly unportable laptops". If you can separate the times you want portability from the times you want big monitors, something like a Lenovo X1 Carbon is portable and has Thunderbolt 3, so you can plug in a rather less portable external GPU enclosure.
Current HMD resolution is still quite low. If you use big desktop monitors for their resolution (eg, lots of text), rather than merely their size (eg, vision impairment), that's a problem. If the big monitors are full of little terminal windows (eg, ops), I saw a report of someone being happy.
https://varjo.com/ , despite its website, appears on track[1] to release a much higher resolution HMD this year. For only "under $10k". With that, you could just render your big monitors. Current software support is unclear.
So a takeaway could be "wait for Xmas 2018".
> what to expect if I were to buy a headset and plug it into my laptop running linux
The HMD shows up as a normal monitor. (In future, that may require telling `xrandr` "yes, I know it's an HMD, just treat it as a normal monitor please".) From there, it's like google cardboard. There's a border region you can't see, and each eye gets half the screen. If you close one eye, you can just drag/open windows and use them normally. I open a full-screen browser window, and render stereo for 3D. The pixels are magnified, and can easily be seen individually.
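If you want to roll the "render stereo for 3D" bit yourself, it can be as small as a side-by-side split in three.js. A minimal sketch (the 90-degree FOV and fixed eye separation are placeholders, and there's no lens distortion correction, which is part of why this works with the HMD treated as a plain monitor):

```typescript
// Minimal side-by-side stereo: one viewport per eye half of the HMD panel.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial());
cube.position.z = -3;
scene.add(cube);

const camera = new THREE.PerspectiveCamera(
  90, window.innerWidth / 2 / window.innerHeight, 0.1, 100);
const stereo = new THREE.StereoCamera(); // derives per-eye cameras
stereo.eyeSep = 0.064;                   // ~64 mm interpupillary distance

renderer.setAnimationLoop(() => {
  camera.updateMatrixWorld();
  stereo.update(camera);
  const w = window.innerWidth / 2, h = window.innerHeight;
  renderer.setScissorTest(true);
  renderer.setScissor(0, 0, w, h);
  renderer.setViewport(0, 0, w, h);
  renderer.render(scene, stereo.cameraL); // left eye, left half
  renderer.setScissor(w, 0, w, h);
  renderer.setViewport(w, 0, w, h);
  renderer.render(scene, stereo.cameraR); // right eye, right half
});
```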
As for tracking and controllers... attention to linux largely evaporated when VR transitioned from niche to chasing the Windows gaming market. I know of nothing usable with Windows MR HMDs. (There was/is? a project to unpack their odd camera format, and then one might run, say, ORB-SLAM2 for tracking, but that's all DIY, and it doesn't get you Windows MR controllers.) For Vive, there's SteamVR and WebVR, but I believe both still require major setup effort, with buggy results. http://idav.ucdavis.edu/~okreylos/ResDev/Vrui/ exists. As do low-level Vive device drivers.
> ... with intel graphics
Ah, that is very much not mainstream. The only easily available stack I know of is my own[2]. And that's for Vive tracking. Tracking Windows MR HMDs then requires taping on a Vive controller, or DIY software dev.
So unless your usage patterns happen to mesh nicely with rather severe current tech constraints... using VR to replace big monitors isn't quite ripe yet. Maybe by Xmas. But once that threshold is passed, I expect a rapid and large impact.
I'm not saying it's not a cool achievement, but they are comparing their product with the existing products. I would like to know how it stacks up. Maybe it's even better, who knows.
What in a smartphone-based VR solution gives it inherently slow motion-to-photon latency? The communication between the CPU and motion capture? The 3D rendering?
Well to be fair, he doesn't own the IP he creates at Oculus, Facebook does. It is entirely possible he'd like to open source it, personally (though I have no idea if this is the case).
As chief architect he probably has some sway to push FB to open source it.
Even if he can't, it's pretty hypocritical for him to ask these people to open-source their work, allowing Oculus to benefit from it rather than worrying about them as competition.
Is it hypocritical to encourage someone to open-source some IP that they own, while not being allowed to open-source some IP that you don't?
It seems like a chief architect would have a lot of pull in technical decisions, but a lot less in business or legal decisions. Open-sourcing the code would have to be considered from all three perspectives.
We're all impostors, so what? I sometimes feel that this basic insecurity in IT really creates strife and abrasiveness between people instead of mutual respect. I find it better to embrace your own imperfections and in this way find your own strengths, which, sadly, are too often shadowed by the need to keep up appearances. There is no perfect IT person. We all suck. All tech sucks. People in general suck at things people do. This is life, and I am ok with it: I am content with who and where I am, and I am ok with other people having different goals, different life experiences and different achievements. My friend works at NASA; I've tried a lot of psychedelics; somebody has been backpacking around the world with $5 in their pocket - and we all deserve to allow ourselves to be happy.
That having been said, these kids are really cool and I wish them the best of luck!
Without knowing anything about the quality, I can say this is pretty amazing. I mean, they put together a team that builds hardware and software for exactly what they want to have.
Quick question for any experts reading this - do Oculus or VIVE use any sort of dead reckoning/movement prediction in their tracking? Also does anyone have any documentation on the APIs for this, or information on how the devices keep track of their latency and calibration information?
There are fundamental limits on latency, especially with spread-spectrum transmission when all of this goes wireless. As accurate as the tracking and pointing are for controllers, I feel like some additional extrapolation is happening. It would be great to have an open source library for this so we can give hand-built rigs the best tracking that's mathematically possible.
>Quick question for any experts reading this - do Oculus or VIVE use any sort of dead reckoning/movement prediction in their tracking? Also does anyone have any documentation on the APIs for this, or information on how the devices keep track of their latency and calibration information?
Absolutely. The Vive uses IMU-based dead reckoning combined with Lighthouse sensors to provide tracking. The dead reckoning is super important for maintaining tracking during sensor occlusion. The API it interfaces with is SteamVR, which is mostly open source, so you can even see how they're doing it. The new-generation Vive Pro will combine this with stereo-camera CV-based inside-out tracking for even better precision.
This is pure speculation on my part, but it's the only conceivable use for the cameras. They are laid out in exactly the same way as on the Samsung Odyssey headset, which does that. I can't imagine they have solved the compositing issues involved in doing pass-through AR yet, although I'd be impressed if that's the case.
Yes, it combines both relative and absolute measurements (each with its own drawbacks) into what's usually called sensor fusion. It's very well explained here: http://doc-ok.org/?p=1478
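A one-axis sketch of that relative+absolute idea (illustrative only; a real tracker fuses full 6 DOF state with a Kalman-style filter, not this toy gain):

```typescript
// Dead-reckon from the IMU between absolute fixes, then nudge toward
// each fix. Relative data is fast but drifts; absolute data is slow
// but drift-free. Gains and names are made up for illustration.
interface AxisState { x: number; v: number } // position (m), velocity (m/s)

function predict(s: AxisState, accel: number, dt: number): AxisState {
  // IMU integration: error grows quadratically until corrected.
  return { x: s.x + s.v * dt + 0.5 * accel * dt * dt, v: s.v + accel * dt };
}

function correct(s: AxisState, absoluteX: number, gain = 0.2): AxisState {
  // Absolute fix (e.g. a Lighthouse sweep): pull position toward it.
  return { x: s.x + gain * (absoluteX - s.x), v: s.v };
}
```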
I know the Vive does some kind of motion prediction for its controllers, at least in the case they lose tracking: if you quickly move a controller out of view of the lighthouses (kind of hard to do if you have the lighthouses set up well; I had to hide the controller under my shirt) then the system will show the controller continuing to move in the direction it was moving for a short bit.
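That "keeps moving for a short bit" behavior looks like plain constant-velocity extrapolation that gets damped after tracking loss. A guess at its shape (the 100 ms horizon is made up, not Valve's actual value):

```typescript
// Coast along the last known velocity, ramping it down to zero.
function extrapolate(lastX: number, lastV: number, tSinceLoss: number): number {
  const horizon = 0.1; // seconds over which the velocity decays to zero
  const t = Math.min(tSinceLoss, horizon);
  // Integral of a velocity that falls linearly to zero at `horizon`.
  return lastX + lastV * (t - (t * t) / (2 * horizon));
}
```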
Can't wait for the day when we truly have modular VR.
It's going to take a few years, and I know Oculus has the right idea with their ecosystem, but it sort of bums me out that the Vive didn't end up being the hacker's headset.
Today it feels like the Vive was built out of spite, and HTC got lucky that Valve went to them first.
I think the less cynical answer is that HTC often has good ideas for physical devices but then often fails to follow through and iterate well. They're also just struggling as a company in general.
But damn, I 1000% agree with being bummed about it not being the hacker's headset. I preordered the Vive because of a VR video of a guy programming the environment he was in at the moment:
https://www.youtube.com/watch?v=db-7J5OaSag
The LCD Windows MR HMDs seem pretty close to being a sufficient display. The useful visual area is something like 900^2 px. Big, visible pixels, that are ok with a 7 pt font. And one can do subpixel rendering, so ~3x the horizontal resolution. If you are ok with working on a small laptop screen, you might be ok with this.
Otherwise, there's Varjo[1] later this year. Similar resolution to looking at your laptop. ~55 px/deg. For "under $10k".
For gloves, that you can still type in... we'll see. I had hope for https://senso.me/ , but they've gone quiet. If you don't mind spending $10k, there are existing trackers.
For software... sigh. Maybe if market size explodes this Xmas, things will improve. There's been a lot of "do something, then abandon it, because the area isn't ripe yet" over the last half decade. And software dev in VR hasn't been where people's attention is focused.
Oh, if one's interest is in-VR creation of VR, instead of in-VR general software dev, then there are a bunch of "authoring environments" being worked on.
Reminds me of Second Life, in which the tools to model, edit, and script the 3D environment were all integrated into the environment itself (and realtime-synchronized over the network, no less).
There's OSVR, which is about as modular as you can get - there are plugins to interface with e.g. SteamVR, and it's compatible with VRPN for peripherals. (Very much a dev kit though, if you're looking for something you can just plug in and use the OSVR headsets are absolutely not that.)
I have to wonder if this team could have gotten more mileage out of working with OSVR, since a lot of the work to connect it with existing VR apps is already done. But there's certainly value in doing it all yourself!
Imagine this with a 4k panel. Depending on the lenses, it could be the highest resolution HMD currently available.
Panelook seems down, but even if only 4K@30 5.5" panels are currently available/affordable... well, no gaming, but I use the Vive and WMR at 30 fps as a desktop alternative.
I've done this in the past. I had 3 fake displays set up using Virtual Desktop and HDMI dummies. When I got used to it, it was very pleasant. One day I sat down, did some work, and then watched some shows. I actually fell asleep with the headset on once. It was, to say the least, confusing when I first woke up in outer space.
It was very pleasant. I wish I had a Rift to try it with; the lenses on the Vive are rough (my only experience before the Vive being the DK2). http://www.roadtovr.com/wp-content/uploads/2015/05/wearality... This effect was highly noticeable for me.
Never had an issue with my mouse or KB. I went onto the app where you were in a living room, with other people in the online room around you, and each of you had your desktop in your lap. It was pretty cool to play Overwatch and have this guy commenting when something sick happened.
I don't believe VR belongs in the gaming scene. Business / EDU all the way. I'd be curious if there are any plans to research trying to get kids to develop synesthesia using VR.
I do also sit in a desk chair. I ended up mostly not using it unless I'm trying to drown out everything around me, because I'm very paranoid if I can't see all the exits/entrances to a room, with my back to the wall. Still, it was probably the most eye-opening VR experience for me. Being able to come into my office, with only my desk in one part of the room, sit there with my KB&M and virtual displays, and then move around the room separately for things like Tilt Brush would be amazing.
> I'd be interested in how your desktop alternative works with the VR displays.
"your desktop alternative" -> collection of crufty exploratory kludges. :)
I'm running custom stacks on linux and X. Browser as compositor, and three.js. React. Tracking from low-level Vive lighthouse driver[1], or laptop webcam optical, or none. Camera passthrough AR.
Most recently, I'd just plug a Lenovo WMR HMD with a duct-taped-on camera into an old laptop with integrated graphics; run a browser full screen on the HMD; run xpra to put emacs and xterm on laptop and HMD; with the camera AR in background; and sometimes track head motion using the laptop webcam and yellow duct-tape HMD marks. Boring and crufty. Though emacs looks kind of "hip" with text changing depth.
> Do you edit text, code, email and surf
Desktop is just xpra[2]. A remote desktop that pulls in individual X programs. No "plug in a null display device" Microsoft silliness. Text is ok. Video is low fps (though I've not tried to improve it). I'd not want to surf in such a small window - think a 900 px square.
Because of resolution (and budget) limits, the UI is more 2D on sphere than 3D. Just picture normal desktop windows. Vive resolution was unusably low and PenTile. LCD Windows MR resolution is tolerable with individual pixel control (thus the 2D). 3D might be ok with subpixel rendering, but I don't yet have a laptop with a dGPU, so I've been putting it off. Given 2D, I'm still just using xpra's kb/mouse handling. Bits of a React-and-three.js approach. For hand tracking, leapmotion is unusable, my finger tracking with fiducial markers is currently too slow, and gloves with IMU fingers are still like $4k+. So I'm basically just doing exploratory spikes, waiting on late-2018 hardware availability and prices.
> desk chair?
Desk chair, conference room, classroom, subway. All sitting with laptop keyboard. I've explored room-scale UI before, but for this, just emacs and xterm in space. Not even in space, just in your face. I'm tired of burning life fighting ephemeral display and input limits. I'd like to do software dev and collaborative compilation and category theoretic type systems in 3D. But I'm going to wait for the needed hardware, rather than struggling against the glacial pace of tech progress.
Wow, thanks for this response. It sounds like you should go for a phd in VR/AR non-game HCI. I like that you added AR or something like it, do the passthrough cameras do edge detection with motion compensated infill (more pixels come through the closer/faster they move) so one doesn't feel cut off from the outside world?
Too bad the IMU finger gloves are so expensive; it doesn't make sense, since the sensors are only $9 each at qty 1. Using gloves like these [0] along with an HMD that had integrated wide-field cameras could give great finger-tracking support.
Just simple passthrough video for now - no vision. And mono, in a bid to reduce eye strain from long hours of use.
> do the passthrough cameras do edge detection with motion compensated infill (more pixels come through the closer/faster they move) so one doesn't feel cut off from the outside world?
Do you mean making "big vision-occluding windows" more transparent when the world behind them changes and/or head spins?
> expensive, doesn't make sense
Existing small high-end market; immature big low-end market; limited ability to do market segmentation; no HomebrewComputerClub-like market-bypass; lack of incentives to avoid collateral damage to rate of progress.
My hope is Xmas 2018 will see both finger and eye tracking get products priced for consumers.
> [color] glove
Yeah. Sigh. They did a startup... and were bought by Oculus. I don't know of an open source release. So here we are, literally a decade later, and you can't easily get one.
It's been interesting to watch VR's widespread innovate-startup-acquisition-unavailable dynamic, and contrast it with say the ferment of HCC. It attracts investment, but devastates the market ecosystem, and cripples research.
Camera passthrough AR seems to greatly relax latency constraints. Like down to 30 fps, so 33 ms per frame. And as long as the AR video is visible in the background (3D objects small or transparent and/or with clipped fov), the 3D rendering can have a lot of lag. Like several hundred ms. More like head as mouse - point it somewhere, and watch the 3D overlaid objects stutter into place. So latency outliers also don't matter. Again, it's not something one would want to game in, but for desktop, it's fine. And way way easier than the usual hard realtime 11 ms per frame.
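The arithmetic behind those frame budgets:

```typescript
// Per-frame time budget at a given refresh rate.
const frameBudgetMs = (fps: number): number => 1000 / fps;

frameBudgetMs(90); // ~11.1 ms: the usual hard-realtime target for VR gaming
frameBudgetMs(30); // ~33.3 ms: tolerable for passthrough-AR desktop use
```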
Not that they have a camera. And I've no idea what their latency is. I'm just observing that by not aiming at the gaming market, they might address points in market space that are largely neglected at present.
My immediate thought went to seeing if a VR headset could be created with a 200+ degree field of view similar to a Pimax or StarVR using two of these displays!
Hi, how did you guess? My friends and I were completely in love with SAO, and we decided to build a virtual world to go to after (or instead of) school. But we ended up building a VR headset.
It's pretty great (and unusual) that a TV show would result in doing something productive! They can be inspirational and give ideas. How did you go from watching the show to starting to build a VR headset? Was there any particular trigger at the start?
Because there is no software platform for driving it? There should be a serial USB / XML interchange format for sending the device capabilities to the driving PC. Something like HDMI device IDs, but for VR displays.
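Something like this record, maybe (purely illustrative; no such standard exists, and every field name here is made up):

```typescript
// Hypothetical EDID-style capability record an HMD could hand the host PC.
interface HmdDescriptor {
  vendor: string;
  panel: { widthPx: number; heightPx: number; refreshHz: number };
  optics: { ipdMm: number; fovDeg: number; distortionK: [number, number] };
  tracking: { dof: 3 | 6; hasImu: boolean; needsExternalSensors: boolean };
}
```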
Cool project by teenage kids; shame that it's useless in practice, since there'd be no software support from major engines. So as a next project you'd need to make an engine plugin, for UE4 for example.
All the "useful" stuff you're thinking of very likely started out much less polished / refined than this project. It seems to have struck a chord with this audience, and it will only get better from here.