Pedagogy

Why the Academy Needs to Think About AR/VR Right Now

The other day, I got to see Dali Alive 360 at the Dali Museum in Saint Petersburg, FL. The time in the small dome came after our time in the museum but before the virtual Salvador Dali took a selfie with us on our way out. (The Dali Museum is very clearly thinking about the future, as can be seen in its Dreams of Dali offering.)

I mention this timeline mostly because I am still trying to decide whether the historical context the dome experience provides would be best placed before or after visiting the art itself. Before provides context, but after permits more space for a viewer's own interpretation to blossom before it faces the tyranny of the perceived "right" answer.

I very much enjoyed Grande Experiences' interpretations of Dali's work and how they presented it. My child (a budding artist) felt inspired by the visit and was excited when she left the presentation. My wife was of two minds as she left the exhibit, unsure how to feel about motion having been added to some of Dali's static images.*

Even while I experienced it, craning my neck to try to take it all in, I could see its potential for something like the forthcoming Vision Pro and the immersive experiences it could offer for learning. Days later, I'm still trying to work out what new medium it is bringing into being — what artistic and pedagogical language it will speak in.

Whatever its native tongue might be, I can sense the academy is not ready for it.

The last thing any of the academics reading this want to hear at this point in the semester (whatever point in the semester it happens to be when this reaches them) is that there is another technological innovation that they should be thinking about — in this case Augmented and Virtual Reality (AR/VR). This is especially true given that they are likely still trying to wrap their heads around Large Language Models like ChatGPT — which, as they may or may not know, is being baked into Microsoft's and Google's office suites.

That, I can hear them say, is enough to be getting on with right now, given the "other duties as assigned" that they are perpetually being asked to take on by administrators, politicians, and pundits.

But now is precisely the time to be thinking about it because the harbingers of its arrival are here.

Tech pundits are currently talking about the frustrating, deal-breaking limitations inherent in the Oculus headset and the unreleased Vision Pro (which is simultaneously being predicted to fail even as Apple sells every one it can make). They are being slightly more charitable toward devices like the technologically less ambitious but more financially accessible XReal Air glasses.

That these headsets are being actively considered and discussed does not make them the equivalent of the first iPhone. These devices are the equivalent of the old late ‘80s-era bag phones — devices that, when adjusted for inflation, come in at close to the price point of the Apple Vision Pro.

Here's why that comparison matters.

There is roughly twenty years between the bag phone and the iPhone.

Moore's Law, which has begun to come under pressure, holds that the number of transistors on a chip will double every two years. If instead of counting transistors we use that development cadence to approximate the time needed to bring a mass-market edition of something like the Vision Pro to market, we have perhaps a decade before these devices are as ubiquitous in our classrooms as cell phones are now.

Ten years translates into two generations of students (freshman to senior year on the four-to-six-year plan) to determine how we, as faculty, staff, and administrators, should use these tools and how we should prepare students for a world where AR and VR are a part of their daily lives. How should all of us use these tools, and how should we live a life where different kinds of realities and experiences begin to blur into one another?

Two academic generations is frighteningly close to the time it takes to research, develop, propose, approve, launch, and begin to assess the effectiveness of a radically new program.

The questions associated with these virtual spaces will require consideration — the kind of consideration that may require institutions to shore up and rebuild philosophy programs. Who owns the space inhabited by a virtual overlay? When Pokémon Go was getting the world walking while everyone tried to catch them all, some initial commercialization began. What happens when it's your home or university? Will this virtual dimension be the "property" of the landowner or will there be a digital land rush for the equivalent of AR/VR mineral rights?

What will it mean to have a virtual experience? Will it be yours? How much of the concert goer's experience will be real (or more than real) when they are "riding" on a drone above the stage and hearing the direct (and carefully managed) output from the sound boards rather than the speakers? Will they be able to say they have had a shared experience with those who were physically there?

There have been arguments about mediated experiences at the appearance of every new technology dating back to Plato's Cave. But without those old philosophical frameworks, we will have difficulty understanding our (and our societies') responses to these new levels of reality.

The biggest reason we should begin to think about this now is all around us. Consider the way we in the academy are collectively flailing about to adjust to the new normal of Large Language Models, machine learning, and AI. That did not come out of thin air. Every one of our cell phones had been offering predictive text for some time. It became a Mad Libs game on social media ("Complete the following using the leftmost word suggested by your phone!"). All the signs were there, but we did not begin to engage them on a large enough scale to be ready to adjust when ChatGPT was released.

We should think through AR/VR while there is still time to think instead of react once that horse has left the barn.

As a first step, we need to engage with these technologies and begin explicitly teaching our students about them and the impact they will have on their lives. From there, we can start to build new frameworks for navigating the Brave New World that is coming into being all around us rather than responding after it has finished slouching into Bethlehem to be born.

—————

* Dali's work, as it hangs on the wall, is static. And yet, he wanted to incorporate motion and always embraced new things. And, of course, he — like the artists and interpreters of Grande Experiences — incorporated works of prior artists into his own work to express or do something new.

ChatGPT: Fear and Loathing

I wanted to spend some time thinking through the fear and loathing ChatGPT generates in the academy and what lies behind it. As such, this post is less a well-written essay than it is a cocktail party of ideas and observations waiting for a thesis statement to arrive.

Rational Concerns

As I have already mentioned (and will mention again below), the academy tends to be a conservative place. We change slowly because the approach we take has worked for a long time.

A very long time.

When I say a long time, consider that some of the works by Aristotle studied in philosophy classes are his lecture notes. I would also note we insist on dressing like it is winter in Europe several hundred years ago — even when commencement is taking place in the summer heat of the American South.

While faculty have complained about prior technological advances (as well as how hot it gets in our robes), large language models are different. Prior advances -- say, the calculator/abacus or spell check -- have focused on automating mechanical parts of a process. While spell check can tell you how to spell something, you have to be able to approximate the word you want for the machine to be able to help you.

ChatGPT not only spells the words; it can provide them.

In brief, it threatens to do the thinking portion for its user.

Now, in truth, it is not doing the thinking. It is replicating prior thought by predicting the next word based on what people have written in the past. This threatens to replace the hard part of writing -- the generation of original thought -- with its simulacrum.

Thinking is hard. It's tiring. It requires practice.

Writing is one of the places where it can be practiced.

The disturbing thing about this generation of simulacra by students, however, is that too many of our assignments ask them to do exactly that. Take, for example, an English professor who gives their students a list of five research topics to choose from.

Whatever the pedagogical advantages and benefits of such an approach, it is difficult to argue that the assignment is not asking students to create a simulacrum of what they think their professor wants rather than to generate their own thoughts on a topic they are passionate about.

It is an uncomfortable question to have to answer: How is what I am asking of the students truly beneficial, and what is the "value add" they receive from completing it themselves instead of asking ChatGPT to complete it?

Irrational Concerns

As I have written about elsewhere, faculty will complain about anything that changes their classroom. The massive adjustments the COVID-19 pandemic forced on the academy produced much wailing and gnashing of teeth as we were dragged from the 18th century into the 21st. Many considered retirement rather than having to learn and adjust.

Likewise, the story of the professor who comes to class with lecture notes, discolored by age and never updated since their creation, is too grounded in reality to be ignored here. (Full disclosure: I know I have canned responses, too. For each generation of students, the questions are new — no matter how many times I have answered them before.)

Many of us simply do not wish to change.

Practical Concerns

Learning how to use ChatGPT and thinking through its implications take time and resources. Faculty Development (training, for those in other fields — although it is a little more involved than just training) is often focused on other areas — the research that advances our reputations, rank, and careers.

Asking faculty to divert their attention to ChatGPT when they have an article to finish is a tough sell. It is potentially a counterproductive activity, depending on where you are in your career.

Why Start Up Again?

One of the things that those of us who teach writing will routinely tell students, administrators, grant-making organizations, and anyone else foolish enough to accidentally ask our thoughts on the matter, is that writing is a kind of thinking.

The process of transmitting thoughts via the written word obligates a writer to recast vague thoughts into something more concrete. And the act of doing so requires the writer to test those thoughts and fill in the mental gaps for the sake of the reader, who cannot follow the hidden paths thought takes.

I am not sure about all of you, dear readers (at least I hope there is more than one of you), but I am in need of clearer, more detailed thought about technology these days.

Educators have been complaining that technology makes our students poorer learners at least since Plato told the story of how the god Thoth's invention of writing would destroy memory.

Between the arrival of Large Language Model-based Artificial Intelligence and the imminent arrival of Augmented and Virtual Reality in the form of the Apple Vision Pro, the volume of concern and complaint is once more on the rise.

I have my concerns, of course. But I am also excited about the potential these technologies offer to assist students in ways that were once impossible.

For example, ask ChatGPT to explain something to you. It will try to do so but, invariably, it will be pulling from sources that assume specialized knowledge — the same specialized knowledge that makes the concept difficult for students to comprehend in the first place.

But after this explanation is given, you can enter a prompt that begins “Explain this to a…”

Fill in that blank with some aspect of your persona. A Biology major. A football player. A theater goer. A jazz aficionado.

You can even fill in types of animals, famous figures — real or fictional (I am fond of using Kermit the Frog), or other odd entities (like Martians).

In short, ChatGPT can personalize an explanation of every difficult concept for every student.
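For readers who want to script this pattern rather than type it into a chat window, here is a minimal sketch in Python. It assumes a chat-style API that accepts a list of role-tagged messages (as most current LLM APIs do); the helper name, the sample explanation, and the persona are my own illustrations, not any vendor's official interface.

```python
# A sketch of the "Explain this to a..." follow-up described above.
# The idea is simply to append a persona-targeted request to the chat
# history before sending it back to the model for a second pass.

def explain_to_persona(first_explanation: str, persona: str) -> list[dict]:
    """Build the chat history for a persona-targeted follow-up request."""
    return [
        # The model's first, jargon-heavy attempt at an explanation.
        {"role": "assistant", "content": first_explanation},
        # The follow-up prompt that asks for a personalized reframing.
        {"role": "user", "content": f"Explain this to a {persona}."},
    ]

# Example: reframing a first-pass explanation for a jazz aficionado.
history = explain_to_persona(
    "Entropy measures the number of microstates consistent with a macrostate.",
    "jazz aficionado",
)
print(history[-1]["content"])  # prints "Explain this to a jazz aficionado."
```

In practice, you would pass `history` to whatever chat endpoint you use; the same two-message pattern works for a Biology major, Kermit the Frog, or a Martian.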

AI and AR/VR/Spatial Computing are easy to dismiss as gimmicks, toys, or primarily as sources of problems that committees need to address in formal institutional policies.

I am already trying to teach my students how to use ChatGPT to their benefit. There are a lot of digressions about ethics and the dangers of misuse.

But everyone agrees that these technologies will change our future. And as an English professor, it is my job to try to prepare my students for that future as best I can.

To do that, I think I will need this space to think out loud.