Hacker News new | past | comments | ask | show | jobs | submit login
Relativ – A VR headset that you can build yourself for $100 (github.com/relativty)
449 points by realusername on Jan 20, 2018 | hide | past | favorite | 84 comments



I was expecting this to be another phone holder; it's much more interesting that you hacked together some hardware. Does this feature a low-persistence display? Also, you mention "cheap tracking for next big update": is this just going to be an improvement over your current tracking, or full 6 DOF tracking? I don't think I've seen any hobbyist 6 DOF tracking for VR yet.

Oh, one note: in your readme, the part about Jonas convincing Chinese factories to sell you parts at premium prices should be changed. You probably meant he got really good prices, but premium pricing means basically the opposite.


> I don't think I've seen any hobbyist 6 DOF tracking for VR yet.

Webcam based hobbyist 6dof headtracking for use with desktop screens has been around for many years (freetrack, ftnoir/opentrack and so on), but the quality is rather dreadful. Still, people who don't mind glacial latency from very heavy smoothing have been very happy with those solutions.

But VR requires so much headmounted technology that tradeoffs between cost/weight and quality shift a lot. For desktop tracking, adding head mounted sensors to the existing single camera 6dof tracking solutions would at least double the amount of hardware involved. But when your baseline is a full VR headset, those sensors are an almost negligible extension. Gyro sensors and/or an "inside out" camera could easily add a lot of precision/speed (effectively the same metric, with filtering) to the rotary axes of single cam 6dof. Last time I looked at opentrack it already supported some sort of fusion between stationary camera and Android gyros. This would be a good starting point for a hobbyist VR rig (not room scale).
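To make the gyro + single-camera fusion idea concrete, here's a minimal complementary filter in the spirit of what opentrack does; the function and constants are my own sketch, not opentrack's actual code:

```python
def complementary_filter(angle_prev, gyro_rate, cam_angle, dt, alpha=0.98):
    """Fuse a fast-but-drifting gyro rate with a slow-but-absolute
    camera angle reading; alpha near 1 trusts the gyro short-term."""
    gyro_estimate = angle_prev + gyro_rate * dt  # integrate the gyro
    return alpha * gyro_estimate + (1 - alpha) * cam_angle
```

Run per rotation axis per frame: the gyro supplies speed and precision, while the camera term keeps long-term drift bounded.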


The opposite word he probably meant is discount price.


“I know this request is a little bit out of the ordinary, but would it be possible for you to charge us premium prices for this item? We’d like to pay more. No? Please, we’ll only buy it if you charge us more. Ok, thanks!”


We see DIY VR a lot within the STEAM educational programs.

Focusing this type of educational attention on children at an early age is critical for future personal and professional development.

This is what allows consumers to have a build-it-yourself VR headset for $100.

I have covered a few early STEAM programs in the Los Angeles area.

https://latechnews.org/raymond-ealy-founder-steamcoders/ https://latechnews.org/stem3-academy-open-house-november-4/


About young children using VR: there was a warning of potential risks two or three years ago: https://uploadvr.com/study-vr-children/

It's not a big study, nor does it come to any concrete results, so I was wondering if you knew of more data on the subject, or of any real-world feedback on the matter.


From what I've read (which isn't hard to find, but I'm on mobile right now), the danger of VR for kids is not an issue; in fact, it can more quickly bring attention to a biological issue they may have (for example, more quickly revealing whether glasses would benefit them).


Kids these days have all the fun. In school they taught us how to animate a 2D box across the screen.

This looks like a real fun project.

If you're part of it, great work!


I don't know them, I just found that project by chance and thought I would share it on HN since it looks cool!


Hi, I'm Maxime. It's really nice of you to share it! Thanks.


Congratulations on your HN ranking, you lucky ones! :) What's the next step for the project? Do you want to turn this into a job?


Looks very interesting. Best of luck taking your project (and dreams) forward.


Motion-to-photon latency isn't mentioned. It's basically the most important characteristic of a good VR set. That's why all smartphone-based VR solutions suck and make you sick.

Carmack has long lectures about that issue.


> Motion-to-photon latency [...] most important [...] sick.

Yes and no. And the "no" seems underappreciated.

I normally run my Vive and Lenovo WMR at 30 fps on an old laptop with Intel integrated graphics. So why hasn't it made people sick? Camera passthrough AR helps. Likely the "comfort mode"-like tunnel-vision effect of not doing barrel or chromatic aberration correction. Perhaps not doing predictive tracking, so lag but no judder. Maybe "visible out the corner of your eye" framing. Maybe something else.

Most VR reporting starts from an assumption of games. Games, games, always games. So "you are there" immersion, with no avoidable visible artifacts, no AR, etc. So 90 fps, constant latency, high GPU and HMD bandwidth demands. But if you don't care about games, if you just want a desktop replacement/alternative... the design constraint space looks very different.

But yes, their focus seems Unity and games.


I'm looking for something that could replace big desktop monitors and annoyingly unportable laptops when I'm on the go. A HMD (or VR headset) could fit the bill. But I want to use it as a generic display device (that e.g. presents as a large canvas floating in space) rather than a VR-specific gaming or movie gimmick. Unfortunately nearly all the discussion I find online is about gaming. So I have no idea what to expect if I were to buy a headset and plug it into my laptop running linux with intel graphics..

It sounds like you might know something about the desktop experience. Do you have any thoughts, or useful links?


> looking for something that could replace big desktop monitors and annoyingly unportable laptops

Tl;dr: wait for Xmas.

A current design point: gaming laptop; Windows; SteamVR; a virtual desktop like http://bigscreenvr.com/ ; HDMI dummies as mentioned by @sgtmas2006, so Windows thinks it has more monitors; RDP/VNC/etc to a linux VM. All off the shelf.

But gaming laptops are likely "annoyingly unportable laptops". If you can segregate when you want portable vs monitors, something like a Lenovo X1 Carbon is portable, and has Thunderbolt 3, so you can plug in a rather less portable external gpu enclosure.

Current HMD resolution is still quite low. If you use big desktop monitors for their resolution (eg, lots of text), rather than merely their size (eg, vision impairment), that's a problem. If the big monitors are full of little terminal windows (eg, ops), I saw a report of someone being happy.

https://varjo.com/ , despite its website, appears on track[1] to release a much higher resolution HMD this year. For only "under $10k". With that, you could just render your big monitors. Current software support is unclear.

So a takeaway could be "wait for Xmas 2018".

> what to expect if I were to buy a headset and plug it into my laptop running linux

The HMD shows up as a normal monitor. (In future, that may require telling `xrandr` "yes, I know it's an HMD, just treat it as a normal monitor please".) From there, it's like google cardboard. There's a border region you can't see, and each eye gets half the screen. If you close one eye, you can just drag/open windows and use them normally. I open a full-screen browser window, and render stereo for 3D. The pixels are magnified, and can easily be seen individually.

As for tracking and controllers... attention to linux largely evaporated when VR transitioned from niche to chasing the Windows gaming market. I know of nothing usable with Windows MR HMDs. (There was/is? a project to unpack their odd camera format, and then one might run say ORB-SLAM2 for tracking, but that's all DIY.) And that doesn't get you Windows MR controllers. For Vive, there's SteamVR and WebVR, but I believe both still require major setup effort, with buggy results. http://idav.ucdavis.edu/~okreylos/ResDev/Vrui/ exists. As do low-level Vive device drivers.

> ... with intel graphics

Ah, that is very not mainstream. I know of only my own stack[2] easily available. And that's for Vive tracking. Tracking Windows MR HMDs then requires taping on a Vive controller, or diy software dev.

So unless your usage patterns happen to mesh nicely with rather severe current tech constraints... using VR to replace big monitors isn't quite ripe yet. Maybe by Xmas. But once that threshold is passed, I expect a rapid and large impact.

[1] https://www.anandtech.com/show/12102/varjo-announces-shippin... [2] https://github.com/mncharity/node-webvr-alt-stack


Thanks for all the info! :)


A 16 year old and her classmates built this. I'm sure your comment has technical merit, but man, give these kids some props!


I'm not saying it's not a cool achievement, but they are comparing their product with the existing products. I would like to know how it stacks up. Maybe it's even better, who knows.


Yeah, but you're also still not saying it _is_ a cool achievement. It is. Period.

Why not submit an issue or get involved with the project? It seems like you have expertise in the area that they would appreciate.


His comment was informative, yours argumentative.

I'm sure everyone agrees this is a big achievement for these students, move along.


What argument is this?

Not everyone has the time to volunteer and not every comment has to say it is a cool project.


Agree with props! nit: Maxime is a boy's name, so his mates.


Are you sure that I'm a boy? ;)


My apologies if I offended, I'm not sure where I got the impression, I thought it was mentioned somewhere.


In French you can stay a boy your whole life, no? At least when talking among friends.


What I meant was: are you sure I'm not a girl?


Of course: the name, the photo, without the slightest hesitation...

Go on, develop your creation and your story for the long term, that's what counts!


I've known a baby girl named Maxime (in France). Badass name for a girl.


What in a smartphone-based VR solution would give it inherently slow motion-to-photon latency? The communication between CPU and motion sensors? The 3D rendering?


Software stack, buffers, buffers, buffers, software stack.


> That's why all smartphone-based VR solutions suck and make you sick.

Well, GearVR and Daydream suck, but at least they don't make you sick if you stay seated and avoid apps with artificial movement.


>the chief architect at Oculus, Atman Brinstock... gave me a precious piece of advice: "open source it".

If only he followed his own advice.


Well to be fair, he doesn't own the IP he creates at Oculus, Facebook does. It is entirely possible he'd like to open source it, personally (though I have no idea if this is the case).


What would you have him open source? His employers property?


As chief architect he probably has some sway to push FB to open source it.

Even if he can't, it's pretty hypocritical of him to ask these people to open source their work, letting Oculus benefit from it, rather than worrying about them as competition.


Is it hypocritical to encourage someone to open-source some IP that they own, while not being allowed to open-source some IP that you don't?

It seems like a chief architect would have a lot of pull in technical decisions, but a lot less in business or legal decisions. Open-sourcing the code would have to be considered from all three perspectives.



This is really cool -- also great to see a teacher having an impact in this manner. Awesome!


Here's one teacher who doesn't have to doubt if his work is meaningful, awesome!


This should carry a trigger warning for imposter syndrome.


We're all impostors, so what? I sometimes feel that this basic insecurity in IT really creates strife and abrasiveness between people instead of mutual respect. I find it better to embrace your own imperfections and in this way find your own strengths, which, sadly, are too often shadowed by the need to keep up appearances. There is no perfect IT person. We all suck. All tech sucks. People in general suck at things people do. This is life, and I am ok with it: I am content with who and where I am, and I am ok with other people having different goals, different life experiences and different achievements. My friend works at NASA; I've tried a lot of psychedelics; somebody has been backpacking around the world with 5$ in their pocket - and we're all deserving to be allowed (by ourselves) to be happy.

That having been said, these kids are really cool and I wish them the best of luck!

*edit: punct.


Without knowing anything about the quality, I can say this is pretty amazing. I mean, putting together a team that builds hardware and software for what they want to have.

Cool kids :-)


Sometimes I would love to go through the black markets of Shenzhen and pick up the parts for projects like these.


This is really cool.

Quick question for any experts reading this - do Oculus or VIVE use any sort of dead reckoning/movement prediction in their tracking? Also does anyone have any documentation on the APIs for this, or information on how the devices keep track of their latency and calibration information?

There are fundamental limits on latency, especially with spread-spectrum transmission when all of this goes wireless. As accurate as the tracking and pointing are for controllers, I feel like some additional extrapolation is happening. It would be great to have an open source library for this so we can give hand-built rigs the best tracking that's mathematically possible.


>Quick question for any experts reading this - do Oculus or VIVE use any sort of dead reckoning/movement prediction in their tracking? Also does anyone have any documentation on the APIs for this, or information on how the devices keep track of their latency and calibration information?

Absolutely. Vive uses a combination of IMU based dead reckoning combined with Lighthouse sensors to provide tracking. The dead reckoning is super important for maintaining tracking during sensor occlusion. The API it interfaces with is SteamVR, which is mostly open source, so you can even see how they’re doing it. The new generation Vive Pro will combine this along with stereo camera CV based inside out tracking for even better precision.
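As a toy illustration of that scheme (not Valve's actual code, just the general shape of IMU dead reckoning corrected by absolute optical fixes):

```python
def dead_reckon(pos, vel, accel, dt):
    """Advance position by integrating IMU acceleration.
    Accurate over milliseconds; drifts badly within seconds."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def lighthouse_correct(pos_imu, pos_optical, gain=0.3):
    """When a Lighthouse fix arrives, pull the drifting IMU
    estimate toward the absolute optical measurement."""
    return pos_imu + gain * (pos_optical - pos_imu)
```

The IMU path keeps tracking alive during occlusion; the optical path keeps it honest. Gains and update rates here are illustrative only.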


Do you have a source on the stereo cameras of the Vive Pro being used for inside out tracking?


This is pure speculation on my part, but it's the only conceivable use for the cameras. They are laid out in the exact same way as on the Samsung Odyssey headset, which does that. I can't imagine they have solved the compositing issues involved with doing pass-through AR yet, although I'd be impressed if that's the case.


> They are laid out in the exact same way as the Samsung Odyssey headset

Look again. The Odyssey's cameras sit below eye level and point down & sideways, which makes sense for tracking:

https://www.samsung.com/us/computing/hmd/windows-mixed-reali...

The Vive Pro cameras sit at eye level, at average interpupillary distance, and point straight ahead:

http://s3.amazonaws.com/digitaltrends-uploads-prod/2018/01/h...

That makes most sense for pass-through AR.


Yes, it uses both relative and absolute measurements (each with its own drawbacks), combined into what's usually called sensor fusion. It's very well explained here: http://doc-ok.org/?p=1478

The headset is not wireless and the controllers have 1-2 ms of latency. They compress the controller data a lot in a very smart way. More info: https://hackaday.com/2016/12/12/cnlohr-reverses-vive-valve-e...


I know the Vive does some kind of motion prediction for its controllers, at least in the case they lose tracking: if you quickly move a controller out of view of the lighthouses (kind of hard to do if you have the lighthouses set up well; I had to hide the controller under my shirt) then the system will show the controller continuing to move in the direction it was moving for a short bit.
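That "keeps moving for a short bit" behavior looks like damped constant-velocity extrapolation; a hypothetical per-frame version (parameters are my guess, not Valve's):

```python
def coast(pos, vel, dt, decay=0.9):
    """One frame of prediction after tracking loss: keep moving with
    the last known velocity, damped so the controller eases to a stop."""
    vel = vel * decay
    pos = pos + vel * dt
    return pos, vel
```

Called every frame after loss of tracking, this reproduces the observed effect: the controller coasts in its last direction, then settles.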


You can chat with Fellowship of this project! https://discord.gg/W9VKbjU


I wonder if you could replace the Arduino with a battery-powered Steam Link without the headset being too heavy.


You still need a cable for the screen's power and HDMI.


A cable between the screen's board and the Steam Link inside the headset.


Cool.

Can't wait for the day when we truly have modular VR.

It's going to take a few years, and I know Oculus has the right idea with their ecosystem, but it sort of bums me out that the Vive didn't end up being the hackers' headset.

Today it feels like the Vive was built out of spite, and HTC got lucky that Valve went to them first.


I think the less cynical answer is that HTC often has good ideas for physical devices but then often fails to follow through and iterate well. They're also just struggling as a company in general.

But damn, I 1000% agree with being bummed about it not being the hackers' headset. I preordered the Vive because of a VR video of a guy programming the environment he was in at that moment: https://www.youtube.com/watch?v=db-7J5OaSag

I would love to use a VR IDE some day.


> I would love to use a VR IDE some day.

The LCD Windows MR HMDs seem pretty close to being a sufficient display. The useful visual area is something like 900^2 px. Big, visible pixels, that are ok with a 7 pt font. And one can do subpixel rendering, so ~3x the horizontal resolution. If you are ok with working on a small laptop screen, you might be ok with this.
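The back-of-envelope math behind those numbers (the FOV figure is my rough assumption, not a spec):

```python
def px_per_degree(px_across, fov_degrees):
    """Rough angular resolution: usable panel pixels over field of view."""
    return px_across / fov_degrees

# ~900 px of usable area across a ~90 degree FOV for an LCD WMR-class HMD:
wmr = px_per_degree(900, 90)  # ~10 px/deg, hence the big visible pixels
```

Compare that against roughly 55 px/deg, which is about the density of looking at a laptop screen, to see why current HMDs struggle with text.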

Otherwise, there's Varjo[1] later this year. Similar resolution to looking at your laptop. ~55 px/deg. For "under $10k".

For gloves, that you can still type in... we'll see. I had hope for https://senso.me/ , but they've gone quiet. If you don't mind spending $10k, there are existing trackers.

For software... sigh. Maybe if market size explodes this Xmas, things will improve. There's been a lot of "do something, then abandon it, because the area isn't ripe yet" over the last half decade. And software dev in VR hasn't been where people's attention is focused.

Oh, if one's interest is in-VR creation of VR, instead of in-VR general software dev, then there are a bunch of "authoring environments" being worked on.

[1] https://www.anandtech.com/show/12102/varjo-announces-shippin...


I thought that their products were available to order now...


> I thought that their products were available to order now...

Senso.me? https://senso.me/order has said "Estimated shipment date: December 2017" for a long time, and the blog hasn't been updated since September.

Varjo? Last I saw, Alpha version available to partners. Though Beta was expected Q1?


Reminds me of Second Life, in which the tools to model, edit, and script the 3D environment were all integrated into the environment itself (and realtime-synchronized over the network, no less).


There's OSVR, which is about as modular as you can get - there are plugins to interface with e.g. SteamVR, and it's compatible with VRPN for peripherals. (Very much a dev kit though, if you're looking for something you can just plug in and use the OSVR headsets are absolutely not that.)

I have to wonder if this team could have gotten more mileage out of working with OSVR, since a lot of the work to connect it with existing VR apps is already done. But there's certainly value in doing it all yourself!


Modular but not adopted, that's the real issue. The Vive looked like that happy medium.


Imagine this with a 4k panel. Depending on the lenses, it could be the highest resolution HMD currently available.

Panelook seems down, but even if only 4K@30 5.5" panels are currently available/affordable... well, no gaming, but I use the Vive and WMR at 30 fps as a desktop alternative.


I'd be interested in how your desktop alternative works with the VR displays.

Do you edit text, code, email and surf inside the helmet?

Do you have any issues with keyboard and mouse?

Do you sit in a desk chair?


I've done this in the past. I had 3 dummy displays set up using Virtual Desktop and HDMI dummies. When I got used to it, it was very pleasant. One day I sat down, did some work, and then watched some shows. I actually fell asleep with the headset on once. It was, to say the least, confusing when I first woke up in outer space.

It was very pleasant. I wish I had a Rift to try it with; the lenses on the Vive are rough, my only experience before the Vive being the DK2. http://www.roadtovr.com/wp-content/uploads/2015/05/wearality... That effect was highly noticeable for me.

Never had an issue with my mouse or KB. I went into the app where you're in a living room, with other people in the online room around you, and you each have your desktop in your lap. It was pretty cool to play Overwatch and hear this guy comment when something sick happened.

I don't believe VR belongs in the gaming scene. Business / EDU all the way. I'd be curious whether there are any plans to research trying to get kids to develop synesthesia using VR.

I do also sit at a desk chair. I ended up stopping using it a lot unless I'm trying to drown out everything around me because I am very paranoid if I can't see all exits/entrances to a room with my back to the wall. Still probably the most eye-opening experience for VR for me. Being able to come into my office, with only my desk in a part of the room, sit there with my KB&M and virtual displays, and then move around my room separately for things like Tilt Brush would be amazing.


> I'd be interested in how your desktop alternative works with the VR displays.

"your desktop alternative" -> collection of crufty exploratory kludges. :)

I'm running custom stacks on linux and X. Browser as compositor, and three.js. React. Tracking from low-level Vive lighthouse driver[1], or laptop webcam optical, or none. Camera passthrough AR.

Most recently, I'd just plug a Lenovo WMR HMD with a duct-taped-on camera into an old laptop with integrated graphics; run a browser full screen on the HMD; run xpra to put emacs and xterm on laptop and HMD; with the camera AR in background; and sometimes track head motion using the laptop webcam and yellow duct-tape HMD marks. Boring and crufty. Though emacs looks kind of "hip" with text changing depth.

> Do you edit text, code, email and surf

Desktop is just xpra[2]. A remote desktop that pulls in individual X programs. No "plug in a null display device" Microsoft silliness. Text is ok. Video is low fps (though I've not tried to improve it). I'd not want to surf in such a small window - think a 900 px square.

Because of resolution (and budget) limits, the UI is more 2D on sphere than 3D. Just picture normal desktop windows. Vive resolution was unusably low and PenTile. LCD Windows MR resolution is tolerable with individual pixel control (thus the 2D). 3D might be ok with subpixel rendering, but I don't yet have a laptop with a dGPU, so I've been putting it off. Given 2D, I'm still just using xpra's kb/mouse handling. Bits of a React-and-three.js approach. For hand tracking, leapmotion is unusable, my finger tracking with fiducial markers is currently too slow, and gloves with IMU fingers are still like $4k+. So I'm basically just doing exploratory spikes, waiting on late-2018 hardware availability and prices.

> desk chair?

Desk chair, conference room, classroom, subway. All sitting with laptop keyboard. I've explored room-scale UI before, but for this, just emacs and xterm in space. Not even in space, just in your face. I'm tired of burning life fighting ephemeral display and input limits. I'd like to do software dev and collaborative compilation and category theoretic type systems in 3D. But I'm going to wait for the needed hardware, rather than struggling against the glacial pace of tech progress.

[1] https://github.com/mncharity/node-webvr-alt-stack [2] https://xpra.org/


Wow, thanks for this response. It sounds like you should go for a phd in VR/AR non-game HCI. I like that you added AR or something like it, do the passthrough cameras do edge detection with motion compensated infill (more pixels come through the closer/faster they move) so one doesn't feel cut off from the outside world?

Too bad the IMU finger gloves are so expensive, doesn't make sense since the sensors are only $9 each qty 1. Using gloves like these [0] along with a HMD that had integrated wide field cameras could have great finger tracking support.

[0] http://news.mit.edu/2010/gesture-computing-0520


> AR or something like it

Just simple passthrough video for now - no vision. And mono, in a bid to reduce eye strain from long hours of use.

> do the passthrough cameras do edge detection with motion compensated infill (more pixels come through the closer/faster they move) so one doesn't feel cut off from the outside world?

Do you mean making "big vision-occluding windows" more transparent when the world behind them changes and/or head spins?

> expensive, doesn't make sense

Existing small high-end market; immature big low-end market; limited ability to do market segmentation; no HomebrewComputerClub-like market-bypass; lack of incentives to avoid collateral damage to rate of progress.

My hope is Xmas 2018 will see both finger and eye tracking get products priced for consumers.

> [color] glove

Yeah. Sigh. They did a startup... and were bought by Oculus. I don't know of an open source release. So here we are, literally a decade later, and you can't easily get one.

It's been interesting to watch VR's widespread innovate-startup-acquisition-unavailable dynamic, and contrast it with say the ferment of HCC. It attracts investment, but devastates the market ecosystem, and cripples research.


I'm surprised they even sell such panels. 4K would be very sweet, but I'm not sure how they address the latency issue with this DIY kit. It's a great start.


Camera passthrough AR seems to greatly relax latency constraints. Like down to 30 fps, so 33 ms per frame. And as long as the AR video is visible in background (3D objects small or transparent and/or with clipped fov), the 3D rendering can have a lot of lag. Like multiple 100 ms. More like head as mouse - point it somewhere, and watch the 3D overlayed objects stutter into place. So latency outliers also don't matter. Again, it's not something one would want to game in, but for desktop, it's fine. And way way easier than the usual hard realtime 11 ms per frame.
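The frame budgets being contrasted here are just 1000 / fps:

```python
def frame_budget_ms(fps):
    """Per-frame time budget in milliseconds at a given frame rate."""
    return 1000.0 / fps

# Hard-realtime gaming VR vs. a 30 fps desktop-replacement use:
gaming = frame_budget_ms(90)   # ~11.1 ms per frame, unforgiving
desktop = frame_budget_ms(30)  # ~33.3 ms per frame, much more relaxed
```

Tripling the per-frame budget (and tolerating lag on overlaid 3D) is what makes integrated graphics viable for the non-gaming use case.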

Not that they have a camera. And I've no idea what their latency is. I'm just observing that by not aiming at the gaming market, they might address points in market space that are largely neglected at present.


This is incredible - well done guys.

My immediate thought went to seeing if a VR headset could be created with a 200+ degree field of view similar to a Pimax or StarVR using two of these displays!


Inspired by watching SAO? I wonder if they prefer Asuna or Lisbeth.


Hi, how did you guess? My friends and I were completely in love with SAO, and we decided to build a virtual world to go to after (or instead of) school. But we ended up building a VR headset.


It's pretty great (and unusual) that a TV show would result in doing something productive! They can be inspirational and give ideas. How did you go from watching the show to starting to build a VR headset? Was there any particular trigger at the start?


I wasn't asking if it was inspired by SAO, I saw the comment in the GitHub readme about it.


It's strange to me that this price point isn't being met by Alibaba/Banggood.


Because there is no software platform for driving it? There should be a USB-serial / XML interchange format for sending the device capabilities to the driving PC. Something like HDMI's EDID, but for VR displays.
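A sketch of what such a capability handshake could look like: a blob the headset sends once over USB serial, in the spirit of a monitor's EDID. Every field name and value here is my invention, not an existing standard:

```python
import json

# Hypothetical capability descriptor a DIY headset might advertise
# to the driving PC (all fields illustrative):
CAPS = {
    "vendor": "diy-hmd",
    "panel": {"width_px": 2560, "height_px": 1440, "refresh_hz": 60},
    "lens": {"fov_deg": 100, "ipd_mm": 63},
    "tracking": {"dof": 3, "imu": "mpu6050"},
}

def encode_caps(caps):
    """Serialize the descriptor for transmission over a serial link."""
    return json.dumps(caps, sort_keys=True).encode("utf-8")
```

The driving PC would read this once at plug-in and configure its distortion correction and tracking pipeline accordingly.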


Cool project by teenage kids; shame that it's useless in practice, since there'd be no software support from major engines. So as a next project you'd need to make an engine plugin, for UE4 for example.


All the "useful" stuff you're thinking of very likely started out much less polished / refined than this project. It seems to have struck a chord with this audience, and it will only get better from here.


I don't think this particular project will go anywhere, really. But these kids will probably gain something from it for their future careers.


Or much simpler: make it OpenVR or OSVR compatible. With any of the two it will work with the vast majority of games available.


Or maybe get it working smoothly with WebVR compatible browsers ?


Just use OpenHMD.




