
Apple Vision Pro: UX Yay or Nay?

By Andrés Zapata | March 11, 2024

A new immersive spatial computing platform shows great potential to improve how we collaborate, communicate, play, and consume media, and, most fundamentally, it offers potential productivity boosts to how we work. But “potential” is the operative word with this disruptive entrant. Let’s get excited, and be patient, as it matures.

Every year, about halfway through the semester, my students invariably ask me whether the heady UX design theory, psychology, neuroscience, and anthropology I peddle will still be relevant when they graduate.

My answer has been as consistent as inertia over my 20+ years of teaching:

As long as input and output largely remain the same, the underpinnings of UX will largely remain the same.

Since the major push for commercial desktop computing in the late ’70s with the debut of the TRS-80, the Apple II, and the Commodore PET (Personal Electronic Transactor), the human-computer interface has been governed by the keyboard and mouse (input) and the screen (output).

Yes, it’s true that touch, gestures, mobile, and speech have nudged how we practice UX, but not enough to reinvent the curriculum or the practice itself.

Enter Apple Vision Pro

Apple’s latest product, the Apple Vision Pro, has the potential to absolutely disrupt how we teach and practice UX. It challenges the traditional way people interface with a machine.

In its simplest form, the device’s immersive audio and visuals dramatically change the output.

And input is largely done by combining eye tracking (akin to a mouse pointer), voice, and open-air gestures (akin to clicking and typing).
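
For developers, the encouraging part is that this look-and-pinch model largely maps onto the interaction events we already target. As a rough illustration, here is a minimal SwiftUI sketch (a hypothetical view of my own, assuming a standard visionOS app target, not code from Apple or from this article) where an ordinary button responds to the user looking at it and pinching, with no gaze- or hand-tracking code at all:

    import SwiftUI

    // Hypothetical visionOS view: the user "points" by looking at the button
    // (eye tracking) and "clicks" by pinching (open-air gesture). The system
    // delivers that as an ordinary tap, so the code stays platform-agnostic.
    struct GreetingView: View {
        @State private var tapCount = 0

        var body: some View {
            VStack(spacing: 16) {
                Text("Pinched \(tapCount) times")

                Button("Say hello") {
                    // Fires when the user looks at the button and pinches.
                    tapCount += 1
                }
            }
            .padding()
        }
    }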

The Potential

Apple Vision Pro has the potential to uproot how we work, shop, and play online. Using the device is in fact an immersive experience Apple is calling “spatial computing.” It offers a trove of useful, neat, and productive applications of augmented and virtual reality to collaborate, communicate, produce, and entertain.

The information-consumption and interaction experience through the Apple Vision Pro is different enough that UXers should start thinking about how we practice in order to produce websites and apps that work as well on it as they do on laptops, desktops, consoles, tablets, mobile devices, and TVs.

The good news is that we have a little bit of time. The Apple Vision Pro was released just a few weeks ago, and it has a very long way to go before it reaches critical mass, which is when we’ll need to roll out updates to our UX practice to accommodate people consuming our products through the device.

The Snag(s)

As with any innovative product, the Apple Vision Pro has a few issues. (Many can justifiably argue that the military and Meta’s Oculus make Apple’s Vision Pro nothing more than a “me too” play, but let us not take for granted Apple’s almost magical ability to popularize technology deep into the cultural zeitgeist, as it has done time and time again with technologies it didn’t invent, such as personal and touch computing.)

The first and potentially most difficult concern to overcome is how it looks on you. Despite the gorgeous and thoughtful packaging Apple is known for, I felt like nothing short of a dork wearing the device at home, and I can’t bring myself to put it on at the office after my kids poked fun and said I looked like “Scuba Steve.” To be fair, there is no cool-looking VR or AR headset on the market right now.

There are a few other snags with the Vision Pro’s physical UX beyond its being tethered, heavy, and hot. For example, the image and video capture button is placed exactly where you’d grab the device to adjust it on your face. I’ve accidentally pressed it a number of times when the device tells me to lift the goggles on my face for better performance (I think it does this when it can’t track my eyes as expected).

The first time, I felt silly for pressing it. But after pressing the button eight or nine times, I shifted the blame away from myself and onto the button placement.

Adding insult to injury, once you enter screen-capture mode, it is very difficult to get out of that interface and back to what you were working on (this one is an issue with the device’s software UX).

But like most things, people will figure it out sooner or later, as I did. The bigger problem was that the environment I had worked to curate and configure was wiped out when I entered the image-capture screens. Reconfiguring everything back to how it was before the slip is easy enough but definitely frustrating, and the relief is only temporary because I have accidentally pressed, and will continue to accidentally press, that darn button.

I’ve considered hacking a bumper onto the button so it can’t be pressed. I haven’t looked hard enough yet, but I bet there is a way to remap the button to do nothing when it is pressed. And if there isn’t, please let this be a suggestion for the next OS update.

Yay or Nay?

After just a few days of experimenting, I would say the Apple Vision Pro is a Yay, or a Yay-ish to be most fair. It shows great potential to improve how we use computers and consume media. Its immersive input and output methods challenge conventions (largely for the better), and it has the potential to introduce considerable productivity boosts to how we work.

For example, I turned my living room into a six-giant-screen workstation where I physically walk up to individual screens to interact with and interrogate them. I put the distraction screens (email, Slack, chat) behind me and the work-work screens in front of me so I can switch from screen to screen with a glance. I still type on an external keyboard and use a mouse because they are faster and more precise than the virtual keyboard and “mouse” offered through the device itself.

What’s Next?

I have lots and lots of questions about the device. For example, what if you only have one arm? No fingers? What if you have vision impairments? Can it actually be used to design, and how? And, most specific to my calling, how exactly does the Apple Vision Pro affect how we practice UX? Regardless of the questions, it’s too early to tell where this technology is heading, but I know I’ll have opinions and will be paying close attention to its eventual adoption.

PS: This article was written on the Apple Vision Pro, FWIW.

Andrés Zapata
Founder

Andrés isn’t like most founders. He’s responsible for the operations and direction of idfive, but he’s also the door-always-open, huevos-rancheros-making leader who’ll help you when the wifi isn’t working. A lifetime learner and multifaceted professional, Andrés has nearly 30 years of experience leading projects for clients in various industries. He believes in the power of research and data to create something beautiful that can do something good.