Vision Pro

My Response to an In-Store Demo of the Vision Pro

I expected to be impressed by the demo of Apple's Vision Pro when I went to the Apple Store. I was more impressed than I expected to be.

My reaction, in hindsight, parallels the subtle sense of awe and excitement the Apple Store employees who greeted me all had about the product. They knew what I was about to experience — an experience akin to the before and after response to climbing Kilimanjaro that Michael Crichton describes in his memoir Travels. It’s an experience (rather than a bit of knowledge) you can only understand and convey after the fact.

Apple has maximized its chances of giving users that experience by making the demo an experience. I don't say this to minimize what they have done to provide it — from the scheduled appointment to the greeters to the employees who lead you through the demo to the runners wearing rubber gloves who bring out the Vision Pro that has been configured for you. Not only is it Apple's job to make this a compelling experience (They have products to sell.) and not only does Apple have a reputation to maintain, but Apple also has a new computing concept to explain to a large public — much larger than the public they introduced the Mac to — a public that will only be able to "get it" when they experience it.

Make no mistake: Spatial computing is not a gimmick. It has promise and potential that are apparent even in a version 1.0 demo that leans into the wonder and magic and pushes the potential for "productivity" and/or creativity into the background.

Let me offer a couple of potential examples that point towards this. And keep in mind what Apple's executives have been telling us. They have been working on this device and its attendant experience for some time. Few would have guessed when they gave us Desk View, for example, that we were looking at a preview of the way the Vision Pro might track our hands.

The first example is the rollout of Freeform, which may now become a collaboration space with an even more expansive, effectively infinite canvas. It was interesting when it first came to the Apple ecosystem. Now, Freeform has a new dimension — one that will let users simultaneously interact with the whiteboard-equivalent in front of them and/or on the device (Mac, iPad, or iPhone) at hand while collaborating with others remotely. Unfortunately, as someone who does not have a Vision Pro, this is something beyond my ability to test. My guess, however, is that the ability to collaborate with yourself and others via iCloud will initially feel awkward to those of us who are not used to Wacom tablets. Nevertheless, it will permit a powerful level of collaboration.

I suspect the same will be true, albeit to a more limited extent, for Notes and the iWork suite. Here, I am interested to see if the Notes collaboration feature from iPadOS 17 (as seen about 41 minutes into the WWDC keynote) will be part of Apple's roadmap for this.

That leads me to two opposing truths — truths that will be challenging for organizations to reconcile.

  • Truth #1: I suspect this is a platform people should be experimenting with, and that those who do not run the risk of giving others a head start at understanding this emerging future.

  • Truth #2: I am not sure I could persuasively justify the cost to the powers that be via a normal purchasing process — especially since it is so attached to a single user rather than being the kind of device you could pass around.

I think it will take, if you can forgive the wordplay, a certain kind of vision on the part of leaders to understand the need for someone to try to wrap their head around this.

E-mail as a Killer App for Vision Pro?

I am only being partially facetious here. What I am getting ready to write about is the everyday research anyone in an office would do.*

Yesterday, I needed to compose one of those long, tedious emails (dreaded by reader and writer alike) that bring someone up to speed on the last six months of a project's activity. It was a trying experience that required searching through sent messages for what was said and archived messages to confirm dates and actions taken so that a timeline could be set down.

It was a process that, quite frankly, grew more annoying and irritating as it went on. This wasn't due to the task, which I could rationally see was needed and justifiable. It was the emotional response to the mental back and forth from one archived message to another (saved in different folders) as I traced multiple strands so I could paint a picture of events for someone who was not a part of them so that they could make a decision.

This morning, as I was reading and writing about other things, one way that irritation could be mitigated occurred to me: Having some of the reference material in a different and accessible space.

To be clear, I am not talking about a multi-window arrangement on a single monitor. I was using multiple windows at various times in the drafting process. I also had more than one screen in front of me. Specifically, I had an iPad Pro in front of me to search through email and access a web portal to the third-party platform that houses the project. The message itself was being drafted with an Apple Pencil on this iPad Mini. (Before anyone new to this blog comments, I prefer to compose with the Pencil when I can because of a different kind of efficiency.)

What would have made the process easier was not the ability to open more windows on the screens I had but the ability to open prior messages in different spaces that I could arbitrarily re-arrange depending on the thread of activity I was following rather than trying to remember which tab or instance of a particular browser held the information I needed.

Having that kind of multiple-monitor setup with VESA mounts would likely be more expensive than a tricked-out Vision Pro. It would also be overkill most of the time.

I've come to the conclusion that I've been thinking of the Vision Pro wrong in this regard. I think it won't be a "killer app" that makes or breaks the device — at least not in terms of computer applications. It will be how one can apply the device to a process (e.g., project management, research projects) like the one I was engaged in that will matter.

And here is where Apple's choice to focus on AR rather than VR will prove superior to prior headsets capable of this kind of spatial computing. A user will be able to interact with physical objects (e.g., a notepad, a book, an experimental apparatus) and not be constrained by a virtual world.

I suspect this may be another instance of Apple having listened past what users say they want and creatively addressed the core idea. It might have been easier to engineer the explicitly described technical feature users asked for, but it wouldn't have addressed that deeper need.

————————

* In case any of you are wondering, that kind of activity is why your schoolteachers and university professors made you write those research papers.


Some Pointless Speculation

Apple Pencil 3 and Apple Pencil USB-C will be used with — and have been, in part, designed for — the Vision Pro.

Hear me out.

This may be an occupationally focused thing (I'm an English Professor.), but it is easy to get used to writing on a whiteboard (or, for those old enough to remember, blackboard) at the front of the room. It is also easy to collaborate in spaces with whiteboards (or whatever term they use for the frosted glass you find in conference spaces).

Apple features video conferencing in their promotional advertisements.

Apple has an amazing app for this in Freeform, which is on the far left of the center row in the Vision Pro Home Screen images released by Apple.

Apple has a robust handwriting-to-text conversion tool in Scribble — which presents a possible solution for those concerned with how to get text into a document on the Vision Pro using something other than an awkward floating virtual keyboard. I will note in passing that the Vision Pro appears to be detecting surfaces. I wonder if the keyboard can flatten out on a tabletop the way Photos adjusts to the ceiling (seen at about 2:15 of “A Guided Tour of Apple Vision Pro”), and whether we might see virtual input surfaces for a Pencil or a virtual keyboard that snap to a surface to aid users who want or need that.
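The surface detection, at least, is something developers can already reach. Below is a minimal sketch, assuming visionOS's ARKit plane detection (ARKitSession and PlaneDetectionProvider, run from inside an immersive space), of how an app might find a tabletop to which a virtual input surface could snap; the snapping behavior itself is purely my speculation:

    import ARKit

    // A sketch, not a shipping implementation: ask visionOS for horizontal
    // planes and watch for one classified as a table. Running this requires
    // an immersive space and world-sensing permission
    // (NSWorldSensingUsageDescription in the app's Info.plist).
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal])

    // Hypothetical helper: anchor a virtual keyboard or Pencil input
    // surface at the given tabletop transform (placement would live in
    // RealityKit).
    func snapInputSurface(to transform: simd_float4x4) {
        // Placement logic would go here.
    }

    Task {
        try await session.run([planes])
        for await update in planes.anchorUpdates {
            let anchor = update.anchor
            if anchor.classification == .table {
                snapInputSurface(to: anchor.originFromAnchorTransform)
            }
        }
    }

Nothing in that sketch is keyboard-specific; the same plane information could just as easily serve a Pencil writing surface.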

If so, get ready for the “using Windows with Apple Vision Pro” jokes as people rely on glass surfaces with their Pencils.

This is, of course, wild speculation. But we have seen artists create in 3D with other products, as Google did with Tilt Brush. I can’t imagine that Apple will not have that kind of capability available to developers — whether it is a next-generation version of Procreate or one of the 3D modeling apps.

I’d love to know if users’ Apple Pencils show up when they are linking a Magic Keyboard or Magic Trackpad during setup — or if a quick tap to the side makes one appear.

A Meta-Meta Analysis of the Vision Pro Analysis

I have been watching a number of tech and near-tech commentators struggling with how to react to Apple's release of the Vision Pro. People who were initially enthralled are now wondering about the less-than-trivial cost of the device, while others, who had not given it a second thought until now, are looking at it and recognizing ways that it might work for them in tones that are far from FOMO. (This reaction is a good example of the kind of thing I am talking about.)

I think that the primary reason for the unusually unstable nature of these reactions this cycle is more than understandable. These creators, commentators, and journalists are in an unenviable position. Their job is to provide viewers, readers, and listeners with the information they need to understand what the Vision Pro really is and whether it is worth paying attention to.

And they can't.

This isn't a failing on their part. The usual product-release playbook doesn't apply here because of the nature of the product. The typical release event for a new piece of tech involves showing the audience (in the hall and via a video feed) the way a new product looks and how it works. Yes, the devices are blown up on the screen, but what you see there approximates what you will see when the device is in front of you.

But that won't work with the Vision Pro. The look of the device — the easiest thing to show — is the least interesting part of the story. The flattened depictions of what a user sees while using a Vision Pro may give you an idea of what to expect but, based on first-hand accounts, don’t capture the experience of the 3D, wrap-around images.

As a result, a standard keynote presentation won't work. Neither, I suspect, would the normal kind of review, because of the challenge inherent in getting the experience across. It’s going to require the kind of creative leap Serenity Caldwell took in her animated review of the 2018 iPad.* Just don’t ask me what that kind of leap looks like.

But whatever form those leaps take, they require more time than anyone has really had.

This may explain why Apple almost feels like it is bypassing the tech media and attempting to reach consumers directly — providing an unmediated trial for those who can get to an Apple Store. It’s not because they want to sidestep the tech pundits. It's because the Vision Pro needs to be experienced more than prior products did.

* I suspect — but have no way of knowing — that she was involved in Apple's "A Guided Tour of the Vision Pro.” I have nothing beyond a sense that her iPad review was cut from the same cloth as this walkthrough.

Why the Academy Needs to Think About AR/VR Right Now

The other day, I got to see Dali Alive 360 at the Dali Museum in Saint Petersburg, FL. The time in the small dome came after our time in the museum but before the virtual Salvador Dali took a selfie with us on our way out. (The Dali Museum is very clearly thinking about the future, as can be seen in its Dreams of Dali offering.)

I mention this timeline mostly because I am still trying to decide if the historical structure it provided would be best placed before or after visiting the art Dali created. Before provides context, but after permits more space for a viewer's own interpretation to blossom before facing the tyranny of the perceived "right" answer.

I very much enjoyed Grande Experiences’ interpretations of Dali's work and how they presented it. My child (a budding artist) felt inspired by the visit and was excited when she left the presentation. My wife was of two minds as she left the exhibit, unsure how to feel about motion having been added to some of Dali's static images.*

Even while I experienced it, craning my neck to try to take it all in, I could see its potential for something like the forthcoming Vision Pro and the immersive experiences it could offer for learning. Days later, I'm still trying to work out what new medium it is bringing into being — what artistic and pedagogical language it will speak in.

Whatever its native tongue might be, I can sense the academy is not ready for it.

The last thing any of the academics reading this want to hear at this point in the semester (whatever point in the semester it happens to be when this reaches them) is that there is another technological innovation they should be thinking about — in this case, Augmented and Virtual Reality (AR/VR). This is especially true given that they are likely still trying to wrap their heads around Large Language Models like ChatGPT — which, as they may or may not know, are being baked into Microsoft's and Google's office suites.

That, I can hear them say, is enough to be getting on with right now, given the "other duties as assigned" that they are perpetually being asked to take on by administrators, politicians, and pundits.

But now is precisely the time to be thinking about it because the harbingers of its arrival are here.

Tech pundits are currently talking about the frustrating, deal-breaking limitations inherent in the Oculus headset and the unreleased Vision Pro (which, simultaneously, is being predicted to fail while Apple sells every one they can make). They are being slightly more charitable towards devices like the technologically less ambitious but more financially accessible XReal Air glasses.

That these headsets are being actively considered and discussed does not make them the equivalent of the first iPhone. These devices are the equivalent of the old late ‘80s-era bag phones — devices that, when adjusted for inflation, come in at close to the price point of the Apple Vision Pro.

Here's why that comparison matters.

Roughly twenty years separate the bag phone and the iPhone.

Moore's Law, which has begun to come under pressure, says that the number of transistors on a chip will double every two years. If, instead of focusing on the number of transistors, we apply that same doubling of pace to the development period of the device (the time needed to get from a bag-phone-like first edition to a mass-market edition of something like the Vision Pro), then the twenty-year gap halves, and we have a decade before these will be as ubiquitous in our classrooms as cell phones are now.
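To make the arithmetic explicit (the dates are my own rough assumptions: bag phones circa 1987, the iPhone in 2007, the Vision Pro in 2024):

    % A back-of-the-envelope sketch; the 1987 date and the halving of the
    % development cycle are assumptions, not established figures.
    \begin{align*}
    2007 - 1987 &= 20 \text{ years (bag phone to iPhone)}\\
    20 \div 2 &= 10 \text{ years (if the pace of development has doubled)}\\
    2024 + 10 &= 2034 \text{ (headsets as common as cell phones)}
    \end{align*}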

Ten years translates into two generations of students (freshman to senior year on the four-to-six-year plan) to determine how we, as faculty, staff, and administrators, should use these tools and how we should prepare students for a world where AR and VR are a part of their daily lives. How should all of us use these tools, and how should we live a life where different kinds of realities and experiences begin to blur into one another?

Two academic generations is frighteningly close to the time it takes to research, develop, propose, approve, launch, and begin to assess the effectiveness of a radically new program.

The questions associated with these virtual spaces will require consideration — the kind of consideration that may require institutions to shore up and rebuild philosophy programs. Who owns the space inhabited by a virtual overlay? When Pokémon Go was getting the world walking while everyone tried to catch them all, some initial commercialization began. What happens when it's your home or university? Will this virtual dimension be the "property" of the landowner or will there be a digital land rush for the equivalent of AR/VR mineral rights?

What will it mean to have a virtual experience? Will it be yours? How much of the concert goer's experience will be real (or more than real) when they are "riding" on a drone above the stage and hearing the direct (and carefully managed) output from the sound boards rather than the speakers? Will they be able to say they have had a shared experience with those who were physically there?

There have been arguments about mediated experiences at the appearance of every new technology dating back to Plato's Cave. But without those old philosophical frameworks, we will have difficulty understanding our (and our societies') responses to these new levels of reality.

The biggest reason we should begin to think about this now is all around us. Consider the way we in the academy are collectively flailing about to adjust to the new normal of Large Language Models, machine learning, and AI. That did not come out of thin air. Every one of our cell phones had been offering predictive text for some time. It became a Mad Libs game on social media (“Complete the following using the left-most word suggested by your phone!”). All the signs were there, but we did not begin to engage them on a large enough scale to be ready to adjust when ChatGPT was released.

We should think through AR/VR while there is still time to think, instead of reacting once that horse has left the barn.

As a first step, we need to think and engage and start to explicitly teach our students about these things and the impact they will have on their lives. From there, we can start to build new frameworks for navigating the Brave New World that is coming into being all around us rather than responding after it has finished slouching towards Bethlehem to be born.

—————

* Dali's work, as it hangs on the wall, is static. And yet, he wanted to incorporate motion and always embraced new things. And, of course, he — like the artists and interpreters of Grande Experiences — incorporated works of prior artists into his own work to express or do something new.