Apple Vision, Like a Pro

The new device is a massive qualitative leap for extended reality

7 min read · Apr 10, 2024
JigSpace for Apple Vision Pro

I may save you some time by stating right away: this is not yet another product review of the Apple Vision Pro. Instead, it’s a brief reflection on something that even Apple seemed to neglect in its own launch communication: the “Pro” part.

For this purpose, my definition of professional use is solving specific, complex problems for professionals in the real world, where state-of-the-art visualisation is an actual job requirement. There are many such professionals, and they can now use the help of a capable extended reality (XR) device like this one.

Having seen many trends and technologies be born, die and get forgotten, I have long stood in the skeptical camp when it comes to XR. It always seemed to me like a solution looking for a problem: a classic supply-push innovation, eagerly seeking elusive adoption.

And yet, Virtual Reality (VR) and Augmented Reality (AR) have clung on stubbornly for more than three decades, thanks largely to a clever, pioneering, but very niche community. This could very well be changing now: Apple joins the game as a latecomer, but it has the capacity to propel VR and AR into the mainstream.

More than an expensive gadget

No writing or visual documentation can aptly describe an immersive experience: you need to try it yourself, which is what I have been doing over time. A few work-relevant capabilities stand out.

The Apple Vision Pro

The image quality is the first thing you notice, thanks not just to the very high resolution (23+ megapixels), but also the pixel density (nearly 3,400 pixels per inch) and the high frame rate (100 Hz), which together produce a very sharp image for internally generated content. Interaction with the device (via eye-tracking and simple pinch gestures) is intuitive, minimising the need for training. The spatial audio is also very good and greatly enhances the sense of immersion.

On the downside, a few things will narrow down the use cases for some. The pass-through video feed (because these are not actually “glasses”) is not at the same level as the internal graphics. The current ergonomics (over 600 g strapped to your head) can make extended use uncomfortable, unless you have the neck muscles of a Formula 1 driver. Not to mention that you need to carry a tethered battery pack with about three hours of autonomy, although packs can be swapped. What is not meant to be swapped is the device’s user: passing it around requires recalibration, and if you need prescription lenses, these have to be specced individually.

Thanks to these qualities, we finally have a new category of device: one that can actually deliver value in real-world work situations, and which can be far cheaper than the traditional alternatives.

Professional applications

Data visualisation

Looking at 3D data on two-dimensional screens may seem good enough, until you experience the benefits of looking at the same data in holographic, 3D space. It’s a mind-shift. At EGGS Design, part of Sopra Steria, we have experience working on projects dealing with geological data analysis, putting the capabilities of existing holographic hardware at the service of teams who need to solve long-standing industry problems, improve workflows and save time.

The same can be said for most categories of digital twins, really. Many real-world scenarios (especially in design and engineering) can benefit from virtualised environments, connected to real data and models in real time. This allows teams to test scenarios, discuss modifications and feed their learnings back into the system. There are credible use cases for products, factories, systems, and even city plans being developed right now.

3D imaging

The same possibilities apply to the modelling and visualisation of 3D imagery, which is already being done successfully in the health sector. HoloCare, for example, developed a tool supporting a range of surgical applications: it enables doctors and surgeons to collaborate around high-resolution 3D scans of a patient’s liver, increasing common understanding and improving the surgery-planning process. This is a well-functioning product built on the HoloLens: HoloCare was awarded CE and UKCA certification, meeting the compliance requirements for wider adoption in the medical industry.


Training and simulation

Architectural spaces are one subset of 3D imaging. Many teams around the world have been doing ground-breaking work using different technologies to address this.

How do you train staff to familiarise themselves with a hospital that hasn’t been built yet? How do you show a technical expert on the other side of the planet the problem you are experiencing in your vessel or power plant? How do you build knowledge and confidence among teams that need training for hazardous or hostile environments?

Simulated environments can be of huge benefit here, sidestepping the practicalities of testing scenarios remotely and providing valuable learning for teams. But it’s about more than environments: delicate subjects can also be valuable to simulate for training, such as interactions with mental health patients, as in the project EGGS Design did together with Attensi for St. Olav’s University Hospital. The team behind Æira, whose launch product we helped design, also started in the VR world, giving medical students a richer learning experience and motivating them through a notoriously tough curriculum.

Æira is a gamified learning platform for medical students


Speaking of hazardous or remote environments, many industries require on-site inspections by a human: take fisheries, energy grids, shipping, or manufacturing, for example. A critical fault can inflict formidable costs. In some cases these inspections require remotely located peers to help assess the situation, but second-hand diagnostics can be tricky via 2D images or live feeds, when those are even possible. A device like this can capture detailed, full-depth pictures that carry much richer visual information, and sometimes that extra level of certainty from high-quality 3D imagery can be instrumental for a team or a client.

Akerblå: Remote control for better fish health

There are many high-end solutions for similar use cases today (caves, domes, and other immersive physical installations), but they are expensive to install and maintain. As anyone working in the simulation field knows, a device like this costs a fraction as much. What is more, the experience it delivers may now be superior.

Command and control

In an age where semi-autonomous vehicles are becoming more common, there are cases where it can be very useful to operate them in a virtual or augmented environment. Operator awareness can now be significantly improved for tasks that are delicate, complex, and demanding of high resolution. Until today, the reliability, complexity and criticality of many such use cases meant that XR devices were either experimental or simply not credible. With hardware like this coming to market, that is changing: you get reliable feeds, real-time 3D environments and responsive controls that are intuitive to use.

The DJI Avata: consumer products are successfully combining operation with VR headsets — Photo by Zac Gudakov on Unsplash

Control centres can also benefit from AR capabilities: screen real estate is virtually expanded, and access to multiple displays can be curated for focus and attention, adding a different level of detail for shorter periods of time (search-and-rescue operations, for example).

The value added by collaboration, gamification and AI

Across all of the use cases above, certain capabilities can deliver true value, like the ability for teams to collaborate on given tasks and scenarios. We are already seeing this in projects where teams benefit from exploring the same visualised 3D data during work sessions.

Another real example is the ability to augment human vision through real-time interpretation assisted by machine vision: this can be useful for industrial settings, for inspection and repair scenarios, or even in medical practice, where we are seeing a growing number of reports documenting the exploration of AR+ML technologies in medical surgeries.

None of these use cases are new, and some of the examples above show that they have already been explored with XR technologies. But the Apple Vision Pro marks a turning point. For one, the quality of the immersive experience is finally credible. Secondly, the Apple Vision Pro has the mainstream power that paves the way for a behavioural change that has held back new technologies before: it won’t be “weird” to wear a headset once it becomes commonly used in certain contexts (the same thing happened with voice commands, until smart devices and vehicles commodified that form of interaction).

Companies that have invested in 3D simulation systems will likely have used off-the-shelf or proprietary game engines, which are largely device-independent, making the transition to the Apple Vision Pro less costly and more resilient to obsolescence.

At EGGS Design, part of Sopra Steria, we have a dedicated mixed reality team that has been solving problems like these for customers for the past eight years. We are excited about the new possibilities unlocked by devices like the Apple Vision Pro, which is only the first of a new generation. If you have a work-related problem where this technology could be useful, we would love to hear from you!
Creative Leader for digital design @ EGGS Design in Oslo. Works with the cross-over between technology and design, for the purpose of helping humans.