But I've heard of and seen so little use in any industry. I would have thought, at a minimum, that having access to hands-free information retrieval (e.g. blueprints, instructions, notes), video chat and calls for point-of-view sharing, etc. would be quite useful in a number of industries. There do seem to be interesting pilot trials involving the HoloLens in US defense (IVAS), as well as healthcare telemonitoring in Serbia.
Do you know of any relevant examples or use cases, or are you a user yourself? What do you think are the hurdles - actual usefulness, display quality, cost, something else?
The biggest hurdle is that none of the large companies think there is enough profit to be made from AR. The HoloLens 2 is the only headset on the market both capable of running the program required and safe to use in an active shop environment (VR with passthrough is not suitable). Unfortunately, the HoloLens 2 is almost 6 years old and is being stretched to the absolute limits of its hardware capabilities. The technology is good but feels like it is only 90% of the way to where it needs to be. Even a simple revision with double the RAM and a faster, more power-efficient processor would alleviate many of the issues we've experienced.
Ultimately, from what I've seen, AR is about making the human user better at their job, and there are tons of industries where it could have many applications. But tech companies don't actually want to make things that could be directly useful to people who work with their hands, so instead we will just continue to toss more money at AI, hoping to make ourselves obsolete.
Like, real work modifying sketches and dragging points around in 3D.
When VR first came out (well, around the time HTC Vive was first launched) I searched madly for something to do this but all the apps felt like toys.
Have you used something you would recommend toward this end, on the design portion of things?
Right now we are just using it for special projects that are complex and have little margin for error. We'd like to be able to use it for everything, but that isn't feasible given where the tech is currently stuck.
Quick question about your use case - is the 3D overlay really that important, or would you get most of the value simply seeing the blueprints in your heads-up display, maybe doing a quick finger swipe or voice command to switch between pages/images?
Then your QC guys are mostly behind computers and rotated to the floor when things are identified.
Ultimately, your VR isn't doing anything more technically accurate than this.
I am curious, what size of clients are you working with and how many contracts has it realistically turned into?
I also believe proper AR hardware/software can revolutionize the QA and inspections industry.
What I am noticing is a chicken/egg problem where companies want proof it works while also being reluctant to put their money where their mouth is and invest in the R&D, which then leads to Microsoft and similar companies refusing to fully invest in new AR tech.
As such, it all stays mostly in experimental and drawing board land, never quite fully reaching the market.
Thoughts?
QA is the big sales point of the software we are using, but there are many other potential applications for the same product. It should be possible to overlay the model on the main assembly prefab, then use that to quickly mark where holes should be drilled and additional pieces attached. The other potential application being explored is using the holographic overlays to construct things out of the usual order: instead of building part 1 and then starting part 2 (which needs to conform to the first part), you can build around the hologram, so that you're not relying on the previously built parts to ensure your angles are correct.
I agree about the chicken/egg problem. It's an emerging technology where the payoff might be a decade away: customers need software that will actually benefit them, developers need reliable hardware capable of running software with practical uses, and hardware companies want to know there is a customer base. The issue is that AR falls under the category of products the customer does not know they actually want, so the only way it is going to be developed is if one of the hardware manufacturers takes a leap of faith and makes the long-term investment. Sadly, I feel like AR is a million-dollar idea with practical uses that has to contend with a business climate where you can make billions making some doodad that collects private data and then displays ads to the masses.
Companies have put billions into R&D, but still haven't delivered a product that surpasses the hurdle rate.
The world in which capitalism has taken hold is not one that produces incrementally better products for niche markets.
You end up mostly with passionate people improving niche markets, and if it involves hardware, we're just at the beginning of small-time custom hardware makers making a dent in this type of market need.
>some combination of way too heavy, expensive, fragile, short battery life, no wifi connectivity, too much UI long to get to point of value and/or simply not useful
Was the screen quality, resolution, visibility in brightness, etc also one of these limiting factors? Or would you say screen quality has gotten reasonable by now?
>The AR/VR use in the field typically came down to looking something up in a manual or calling someone.
That's good to hear as someone interested in the field, I've been skeptical of the fidelity and utility of the fancy augmented 3D overlays.
Ah, I see you realized something similar:
>The cool AR 3-D demos or overlays rarely worked in the field on real equip or didn't actually convey anything useful (everyone knows the basics of how the machine works).
>Both easily and perhaps more effectively done on a smartphone.
Surely there are some use cases where hands-free operation would be a game changer, but I don't know enough about potential industries where this would be the case.
>The use case we're currently working on is inspections or filling out forms with audio/videos.
That's pretty interesting. Do you even need a screen, or just voice? I would think a pretty quick-and-dirty way to do it is to take PDF forms, enumerate (put small numbers next to) every editable field, and then use voice commands like "write the following in field 3: ...". The purpose of having a screen would be to verify what the LLM + voice is inputting into the form. Then at the end you can tell it to save/submit or whatever.
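The numbered-fields idea above could be sketched roughly like this (a hypothetical illustration, assuming speech has already been transcribed to text by some STT engine; the field numbers correspond to the small labels printed next to each editable field):

```python
import re

# Matches commands of the form "write the following in field N: <text>".
WRITE_CMD = re.compile(r"write the following in field (\d+):\s*(.+)", re.IGNORECASE)

def parse_command(transcript: str):
    """Return (field_number, text) for a fill command, or None otherwise."""
    m = WRITE_CMD.match(transcript.strip())
    if not m:
        return None
    return int(m.group(1)), m.group(2).strip()

# Field number -> entered text; this is what a HUD would show for verification.
form = {}

cmd = parse_command("Write the following in field 3: serial number A-1042")
if cmd:
    field, text = cmd
    form[field] = text
```

The screen-or-no-screen question then comes down to whether showing `form` back to the user for confirmation is worth the hardware, or whether a spoken read-back is enough.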
I use VR for gaming. The headsets are uncomfortable after about 45 minutes, they're hot and sweaty, and they're incredibly isolating. All that's fine if you want to slay baddies while alone at home, but utterly repellent to most people.
I saw it a couple of times in action; it's really impressive.
- Viture Pro XR glasses
- Vuzix Z100 glasses (through Mentra)
The Vitures I use as a lightweight alternative to VR headsets like the Meta Quest: I lie down on the couch/in bed and watch videos while wearing them.
The Vuzix are meant to be daily-wear glasses with a HUD; I have yet to break them in.
Later this year, Google/Samsung are due big AR releases, and I think Meta is as well.
It'll be the debut of Android XR.
Which is saying something. My second most ergonomic situation is a 120" screen with two 55" screens tilting in from both sides (both in portrait, so all 3 screen verticals line up). All wall mounted. I started wall mounting to get screens off my desk, at which point it was clear bigger was always better.
But for the Vision it took many rounds of trying third party head gear, and customization, before I could wear it comfortably for unlimited time. I just kept trying things until I got there.
I am an obsessive optimist when it comes to ergonomics. Once the Vision is ergonomic for one's head, then it becomes a super ergonomic solution over all. The screen can be wherever you need it for best neck, back and body posture, whether at a desk, couch, (non-driving) car seat, or in bed. And a very wide screen beats any screen patchwork. Although I would like the Vision even more if I could have more than one Mac driven screen when I wanted to. (Recommend expansion batteries that clip on the original, and round magnetic USB-C cable adapters, for more spontaneous mobility.)
I like the standard Apple straps in a pinch. But my face needs a serious break from the weight they distribute on it, every 30-120 minutes.
It's good enough for watching videos, but for working and reading text, I personally haven't used a device with high enough text quality to prevent eye strain.
I'm very bullish on AR though, and I'm willing to bet that consumer grade devices which are genuinely comfortable to work in will become available within the next 2-3 years.
To me, AR is the next step in Human-Computer Interaction while we wait for full BCI (Brain-Computer Interface) devices.
Happy to be proven wrong obviously but so far that's my outlook.
Worked great to avoid eye fatigue/posture issues on airplanes though. I'm happy I have them, but in hindsight I'd have gotten a Viture or something with a better nose bridge and a narrower field of view.
https://loop.equinor.com/en/stories/shaping-the-future-with-...
https://loop.equinor.com/en/stories/developers-trip-johan-sv...
We post case studies regularly on our blog, so you can read about real world deployments there: blog.resolvebim.com
From my experience the hardware is still a hurdle simply because it doesn’t completely replace all pc based workflows right now and therefore has to be used selectively at the right moments alongside 2D monitors.
From your company's landing page, I saw the video and it looks like you're working with in-office project managers and similar white-collar types.
Do you work with any products in the field, like on the job sites? Is that something that would be interesting or valuable? Some examples: letting workers be able to quickly share first-person recorded videos of issues, first-person video chat with supervisors, ability to pull up blueprints and instructions in their heads-up displays, etc? Assuming perhaps a different platform than the Meta, as I don't think fully covered VR would be appropriate for a worksite.
You can see in that video that you can markup the site virtually and yes you can record video, leave issue markers, pull up 2D plans from other tools we integrate with like Procore, ACC, etc. However, it still is primarily a stationary tool on site because of the field of view limitations.
There are some rumors about next gen MR headsets allowing for a "full field of view" by basically removing the head gasket altogether. We'll see.
Hurdles? Battery life, proper hardening against dust/water.
It's the opposite: surround projection, so you can put a group in a room into a scenario.
Tips: Get an upgraded headband, silicone face cover and carrying case. Use a physical USB connection for lower latency (& turn off the wifi/bluetooth as it's no longer needed). Bring a cloth for keeping the lenses clean. Bump the Retina quality up in the Immersed client (on Mac). Lower resolutions (1920x1080) are more comfortable to use over longer periods. Use a travel neck pillow for reclined usage. Bring replacement batteries and a charger for the controllers.
- Does your neck get tired?
- Do you ever have to be on video calls? I can't talk to clients looking like a spaceman
It's not actually the weight. I have a Quest 3 with a BoboVR head strap, external battery, etc that all add up to be heavier than the AVP, but I can easily go for multi-hour social sessions with that on without any physical discomfort. You can put a ton of weight on your head with perfect comfort as long as it's balanced properly.
The AVP's real problem is that its ergonomics are just shit. As with a bunch of other things, they designed for the ads instead of actual usability, so it's significantly worse than headsets that are actually much heavier, and the earband design with the way-too-far-back connectors and no top connections makes it nigh impossible for third parties to improve on it.
The closest thing I've seen to making it comfortable is the third-party ResMed Kontor headstrap, and that's being produced in such low numbers that it's functionally impossible to actually buy.
I could see a frequent traveler using an AVP as a "full setup" on the go. In my experience, I can get away with most things with a MacBook. Some projects really benefit from the extra screen real estate (and a mechanical keyboard).
I can get any task done with my laptop. But not a full day's work. And if I want to travel while I work (which I would like to do) then I need a better solution. This is why I'm looking into VR and also a 4K projector, but a projector would have to be able to be seen in a bright environment, and I don't know what the current state of projectors is.
Looks like 3k lumens is your maximum. https://www.google.com/search?q=3000+lumen+projector+in+dayl...
We have another product that's geared towards collaborating and sharing data between teams and vendors, and it seems better suited there, but that one is a web application, and I don't know how well VR glasses are supported there.
I think it'd be awesome in the CAD applications themselves but I don't know if any of them support it out of the box.
At the end of the day, you are asking someone to put something on their face that is still very different ergonomically than glasses (and I’m not sure even glasses would overcome enough friction). The ROI has to overcome the business (or personal) friction of buying the hardware, the friction of the form factor plus any friction from changed workflows.
Now put that in an operational workflow instead of training and the risks go up. Most are still skeptical of device reliability (not to say there aren’t suitable devices for operational roles but the perception is still a hurdle, and the applicability is often device-specific). Now add on to that limited experience with devices (many decision makers have never put one on), added security complications, specialized software development skills, limited content libraries and very real accessibility concerns and a lot of enterprises can never get past an “innovation center demo.”
For many industries the value proposition just isn’t there yet. But that said, I’d recommend digging a little deeper as there’s a lot of existing use-cases and deployments, both failed and successful, outside of IVAS.
Very curious, don't leave us hanging! Assuming it's not confidential.
They use the Apple Vision Pro headset fairly significantly in human interaction and data gathering that they then utilize for simulations.
I spent a lot of time in graduate school researching AR/VR technology (specifically regarding its utility as an accessibility tool) and learning about barriers to adoption.
In my opinion, there are three major hurdles preventing widespread adoption of this modality:
1. *Weight*: To achieve computation like that of the HoloLens, you need powerful onboard processing. The simplest solution is to put the processing in the device, which adds weight to it. The HoloLens 2 weighs approximately 566g (or 1.24lb), which is a LOT compared to a pair of traditional glasses at approximately 20-50g. Speaking as someone who developed with the HL2 for a few years, all-day wear with the device is uncomfortable and untenable. The weight of the device HAS to be comfortable for all-day use, otherwise it hinders adoption.
2. *Battery*: Ironically, making the device smaller to accommodate all-day wear means simultaneously reducing its battery life, which reduces its utility as an all-day wearable: any onboard battery must be smaller, and thus store less energy. This is a problematic trade-off: you don't want the device to weigh so much that people can't wear it, but you also don't want it to be so light that it ceases to have useful function.
3. *Social Acceptability*: This is where I have some expertise, as it was the subject of my research. Simply put, if a wearer feels as though they stand out by wearing an XR device, they're hesitant to wear it at all when interacting with others. This means that an XR device must not be ostentatious, as the Apple Vision Pro, HoloLens, MagicLeap, and Google Glass all were.
In recent years, there have been a lot of strides in this space, but there's a long way to go.
Firstly, there is increasingly an understanding that the futuristic devices we see in sci-fi cannot be achieved with onboard computation (yet). That said, local, bidirectional, wireless streaming between a lightweight XR device (glasses) and a device with stronger processing power (a la a smartphone) provides a potential way of offloading computation from the device itself, and simply displaying the results onboard.
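The split-compute pattern described above can be sketched very roughly as follows. This is an illustrative toy, not any vendor's API: the glasses only serialize sensor data and render text, while all heavy processing happens on the paired phone (the "inference" step here is a stand-in):

```python
import base64
import json

def glasses_to_phone(frame_bytes: bytes) -> str:
    """Glasses side: serialize a camera frame into a small JSON message."""
    return json.dumps({"type": "frame",
                       "data": base64.b64encode(frame_bytes).decode()})

def phone_process(message: str) -> str:
    """Phone side: decode the frame, run the heavy model, and return
    text for the glasses' display. The 'model' here is a stand-in."""
    frame = base64.b64decode(json.loads(message)["data"])
    label = f"{len(frame)} bytes received"  # stand-in for real inference
    return json.dumps({"type": "display", "text": label})

def glasses_render(message: str) -> str:
    """Glasses side: extract the text to draw on the HUD."""
    return json.loads(message)["text"]
```

The point of the split is that the glasses only need enough silicon to encode frames and draw text, which is what makes the all-day weight and battery budgets plausible.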
Secondly, Li+ battery tech continues to improve, and there are now [simple head-worn displays capable of rendering text and bitmaps](https://www.vuzix.com/products/z100-smart-glasses) with a battery life of an entire day. There is also active development work by the folks at [Mentra (YC W25)](https://www.ycombinator.com/companies/mentra) on highlighting these devices' utility, even with their limited processing power.
Lastly, with the first two developments combined, social acceptability is improving dramatically! There are lots of new head-worn displays emerging with varying levels of ability. There was the recent [Android XR keynote](https://www.youtube.com/watch?v=7nv1snJRCEI), which shows some impressive spatial awareness, as well as the [Mentra Live](https://mentra.glass/pages/live) (an open-source Meta Raybans clone). In terms of limited displays with social acceptability, there are the [Vuzix Z100](https://www.vuzix.com/products/z100-smart-glasses), and [Even Realities G1](https://www.evenrealities.com/g1), which can display basic information (that still has a lot of utility!).
As an owner of the Vuzix Z100 and a former developer in the XR space, the progress is slow, but steady. The rapid improvements in machine learning (specifically in STT, TTS, and image understanding) indirectly improve the AR space as well.
I mean, even the Sony Walkman started with audio streaming from a hip-mounted device. Why not a hip-mounted computer/power unit for AR, especially for work/industrial usage?
In my day job I occasionally hear about some AR startup doing demos for training and parts setup in CNC machines but the value add seems to be too insignificant for the work required.
VR is the zombie category that comes around every 10 years. All that's missing is another Lawnmower Man sequel.
Multiple companies have bought it, and we have large companies as clients who’ve used it to train 1000s of their blue-collar workers, even in sectors such as construction in a relatively challenging (in terms of pricing and value) market.
We have a significant (I think!) number of devices deployed, and most of my clients end up purchasing more after the initial purchase and pilot.
I think that’s for a few reasons:
1) VR, when well designed, can offer a first-person experience of being an accident victim due to the viewer's own oversight or someone else's. That makes it a far more effective way to draw the learner's attention to the importance of safety protocols, etc.
2) Our solution is multi-lingual: it’s currently available in 10 regional Indian languages - that matters, since a significant fraction of the workforce may not understand English. Our localisation extends beyond that, but language is a big thing in enabling access and usage.
3) if you have to invest 10-15 min per learner (often one-on-one as the instructor) to onboard each learner before they can use your solution, it becomes very difficult to scale and raises the bar for cost-effectiveness. So that’s something we focus on heavily.
4) Setup time: don't create a solution that requires IT support, or someone who understands how to set up / load SteamVR / Oculus Link / Meta Horizon. If the solution adds 20-30 min of workload to the staff on a site, then adopting it becomes that much more painful. So we've worked very hard to develop an integrated system where the instructor can quickly onboard 10-15 learners and get going with the session in 5 min.
5) Workflow changes: often, introducing VR means changing some part of the organisation's workflow. Many VR solutions don't consider or acknowledge this in their design: clients get initially excited, but when it comes to actually using it on a daily basis, the deployment and workflow frictions can completely tank VR adoption.
I’ve seen multiple solutions fail because of this, and we focus extensively on this when we design our solutions.
India is a hard market for VR, honestly because it’s very price sensitive. But I think we’ve made some progress here, because we’ve focused extensively on system robustness, ease of deployment, localization, and a lot of user-centered design.
We’ve also developed sophisticated VR-based training solutions for SOP training. VR can be, and is, very effective for initial onboarding and SOP training. Again, the challenge here is usability: most of the learners don’t know how to use the controllers, and learning how is not easy and takes time. So that onboarding is critical and needs to be done well.
In SOP training, our experience is that it can, if designed well, significantly reduce on-boarding time; however, you still need the last 20% of training on the actual thing for it to stick, and for the learner to actually _learn_.
Edit: formatting and word choice
First off, most solutions work poorly out in daylight, especially the bright Indian sun. So that automatically adds friction in terms of deployment opportunities / field deployment.
The second issue is the limited FoV: 40-45 degrees. That's a pretty small display area to play with in terms of pushing detailed information, etc.
Third, again, the usability, ruggedness and user-onboarding challenges.
So, the use case has to be important enough and significant enough that the user / organization needs to accept all these frictions and still derive enough value out of the solution - that leaves very few use cases.
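The FoV point above is easy to make concrete with a little trigonometry (an illustrative calculation; the 0.6 m working distance is my own assumption, not from the comment):

```python
import math

def fov_width(fov_deg: float, distance_m: float) -> float:
    """Width (in metres) subtended by a horizontal FoV at a given distance."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

# At a comfortable ~0.6 m working distance, a 40-degree FoV covers
# roughly 0.44 m, about the width of a 20-inch 16:9 monitor.
narrow = fov_width(40, 0.6)
```

So all the blueprints, annotations, and telemetry have to fit into roughly one small monitor's worth of view, which is why dense information displays are such a poor fit for these devices.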
Add to that, the HoloLens is expensive and hasn’t seen any significant development in the past few years, and RealWear-type devices aren’t cheap for large-scale adoption either. A smartphone/tablet in hand is often a better, more maintainable, more cost-effective solution. I’ve seen clients securely mount a smartphone on their helmet and set up a Teams call for remote viewing, and it works!
We haven't been able to get a contract in nearly two years. Almost all of our competition in this sector have gone bust, and my company is about to follow suit.
The answer appears to be "no". The industry at large does not have enough interest in AR/XR to sustain any sort of competitive business to provide those products.
This is fascinating. What are your most used features?
> extended monitor
Do you also use a real monitor in the field of view?
That said, that might be because the thing that always stops me first is how front-heavy the damn thing is. I do wonder how GP deals with that.
He wouldn’t invest in Palantir either.
Convince the best seed fund in the world that it has a blind spot, maybe some risks will yield something great.
So ymmv