AI

Advice Instead of Just Complaining

Thus far in this blog's reboot, I have spent a good bit of time asserting that faculty need to change, adapt, and grow. But I have provided few examples and less advice on how we might do that.

In this post, I'd like to provide a place to start.

I'm an English Professor so my practical advice begins there. (Here?)

For many years, I have asked students to submit papers that required research. I spent time explaining how, where, and why to conduct that research.

With the arrival of Large Language Models like ChatGPT, I have begun to recognize I forgot to tell students something important about their research.

Here is what I am now telling them:

Whatever research they do, I can replicate. If I wanted to learn about their topics, I could go to the library and read the articles they have found. I can easily access what scholars, journalists, and other experts have already said.

What I cannot do is find what they think about the topic.

That is what they bring to their assignments that ChatGPT never can.

And that is what I value most.

The rules of grammar and the skill of writing still matter, of course, but in an age when machines learn and compose, they need to be reminded of the central value of their voice and viewpoint -- even if those are imperfectly, partially formed.

But research should be there to support their thoughts — not replace them.

And my job is to help them better learn to express their own thoughts — not merely parrot the thoughts of others.

The challenge for all of us is that we cannot always be interested, and sometimes we have to be more prescriptive so that students learn the skills they need to express themselves. We get tired and overwhelmed. We can't be there 100% of the time, and we are often asked to care for more students than we should by administrators whose job it is to focus on the numbers.

But our students have to learn that we try.

The Sky is Still Falling (Long Term)

Before returning to some of the technical and pedagogical issues involved with AI in the classroom, it is worth understanding some of the personal and personnel aspects of all this. Without them, a full appreciation of the existential threat AI presents to the academy in general, and the professoriate in particular, can get lost in the shuffle while people focus on academic dishonesty and the comedy that can ensue when ChatGPT gets something wrong.

A few data points:

It has not been long since a student at Concordia University in Montreal discovered the professor teaching his online Art History class had been dead for two years.

Not only are Deep Fakes trivially easy to create, 3D capture tools are making it easy for anyone to make full-body models of subjects.

You can now synthesize a copy of your own voice on a cell phone.

We can digitally clone ourselves.

You can guess where this is going.

Many years ago (2014, for those recording box scores), I told a group of faculty that the development of good online teaching carried with it an inherent risk -- the risk of all of us becoming TAs to rock star teachers. When I explained this, I told my audience that, while I considered myself a good teacher, I had (as a chair) observed and (as a student) learned from great teachers.

I asked then and sometimes ask myself now: What benefit could I, and JCSU, offer to students signing up for my class that outweighed the benefit of taking an online class with that kind of academic rock star?

I still don't feel I have a compelling answer for that question.

Now, in addition to competing with the rock stars of the academy, there is a new threat. It is now simple enough to create an avatar -- perhaps one of a beloved professor or revered figure (say, Albert Einstein or Walter Cronkite) -- and link it to a version of ChatGPT or Google Bard that has been taught by a master teacher how to lead a class, a scenario discussed in a recent Future Trends Forum on "Ethics, AI, and the Academy".

How long until an Arizona State reveals a plan for working it into their Study Hall offering?

AI may not be ready for prime time because it can still get things wrong.

But, then again, so do I.

The pieces necessary to do that kind of thing have been lying around since 2011. Now, even the slow-moving academy is beginning to pivot in that direction.

Chat GPT: Fear and Loathing

I wanted to spend some time thinking through the fear and loathing ChatGPT generates in the academy and what lies behind it. As such, this post is less a well-written essay than it is a cocktail party of ideas and observations waiting for a thesis statement to arrive.

Rational Concerns

As I have already mentioned (and will mention again below), the academy tends to be a conservative place. We change slowly because the approach we take has worked for a long time.

A very long time.

When I say a long time, consider that some of the works by Aristotle studied in Philosophy classes are his lecture notes. I would also note we insist on dressing like it is winter in Europe several hundred years ago -- even when commencement is taking place in the summer heat of the American south.

While faculty have complained about prior technological advances (as well as how hot it gets in our robes), large language models are different. Prior advances -- say, the calculator/abacus or spell check -- have focused on automating mechanical parts of a process. While spell check can tell you how to spell something, you have to be able to approximate the word you want for the machine to be able to help you.

ChatGPT not only spells the words; it can provide them.

In brief, it threatens to do the thinking portion for its user.

Now, in truth, it is not doing the thinking. It is replicating prior thought by predicting the next word based on what people have written in the past. This threatens to replace the hard part of writing -- the generation of original thought -- with its simulacrum.
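To make "predicting the next word" concrete, here is a toy sketch of my own (purely illustrative -- real large language models use neural networks trained on vast corpora, not simple counts): a bigram counter that, given a word, suggests whichever word most often followed it in a tiny sample text. The spirit is the same: continuation by statistical pattern, not thought.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus for illustration.
corpus = (
    "the professor graded the papers and the professor returned "
    "the papers to the students"
).split()

# Count which words follow each word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("graded"))  # continues with whatever followed "graded"
```

The model "writes" by chaining such predictions. Nothing in the counts resembles understanding, which is precisely the point.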

Thinking is hard. It's tiring. It requires practice.

Writing is one of the places where it can be practiced.

The disturbing thing about this generation of simulacra by students, however, is that too many of our assignments ask them to do exactly that. Take, for example, an English professor who gives their students a list of five research topics to choose from.

Whatever the pedagogical advantages and benefits of such an approach, it is difficult to argue that such an assignment is not asking the student to create such a simulacrum of what they think their professor wants rather than asking them to generate their own thoughts on a topic that they are passionate about.

It is an uncomfortable question to have to answer: How is what I am asking of the students truly beneficial and what is the "value add" that the students receive from completing it instead of asking ChatGPT to complete it?

Irrational Concerns

As I have written about elsewhere, faculty will complain about anything that changes their classroom. The massive adjustments the COVID-19 pandemic forced on the academy produced much wailing and gnashing of teeth as we were dragged from the 18th Century into the 21st. Many considered retirement rather than having to learn and adjust.

Likewise, the story of the professor who comes to class with lecture notes, discolored by age and never updated since their creation, is too grounded in reality to be ignored here. (Full disclosure: I know I have canned responses, too. For each generation of students, the questions are new -- no matter how many times I have answered them before.)

Many of us simply do not wish to change.

Practical Concerns

Learning how to use ChatGPT, and thinking through its implications takes time and resources. Faculty Development (training, for those in other fields -- although it is a little more involved than just training) is often focused in other areas -- the research that advances our reputations, rank, and career.

Asking faculty to divert their attention to ChatGPT when they have an article to finish is a tough sell. It is potentially a counter-productive activity depending on where you are in your career.

Why Start Up Again?

One of the things that those of us who teach writing will routinely tell students, administrators, grant-making organizations, and anyone else foolish enough to accidentally ask our thoughts on the matter is that writing is a kind of thinking.

The process of transmitting thoughts via the written word obligates a writer to recast vague thoughts into something more concrete. And the act of doing so requires us to test those thoughts and fill in the mental gaps for the sake of our readers, who cannot follow the hidden paths our thoughts take.

I am not sure about all of you, dear readers (at least I hope there is more than one of you), but I am in need of clearer, more detailed thought about technology these days.

Educators have been complaining about how technology makes our students more impoverished learners at least since Plato told the story of how the god Thoth's invention of writing would destroy memory.

Between the arrival of Large Language Model-based Artificial Intelligence and the imminent arrival of Augmented and Virtual Reality in the form of Apple Vision Pro, the volume of concern and complaint is once more on the rise.

I also have my concerns, of course. But I am also excited for the potential these technologies offer to assist students in ways that were once impossible.

For example, ask ChatGPT to explain something to you. It will try to do so but, invariably, it will be pulling from sources that assume specialized knowledge -- the same specialized knowledge that makes it difficult for students to comprehend a difficult concept.

But after this explanation is given, you can enter a prompt that begins “Explain this to a…”

Fill in that blank with some aspect of your persona. A Biology major. A football player. A theater goer. A jazz aficionado.

You can even fill in types of animals, famous figures — real or fictional (I am fond of using Kermit the Frog), or other odd entities (like Martians).

In short, ChatGPT will personalize an explanation for every difficult concept for every student.
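For anyone scripting this rather than typing it into a chat window, the follow-up amounts to appending one more user turn to the running conversation. Here is a minimal sketch in the role/content message format used by chat APIs such as OpenAI's; the concept, audience, and reply text are placeholders, and no request is actually sent:

```python
def followup_messages(concept, audience, first_reply):
    """Build the message list for asking the model to re-explain
    a concept for a particular audience (the "Explain this to a..." move)."""
    return [
        {"role": "user", "content": f"Explain {concept}."},
        {"role": "assistant", "content": first_reply},
        {"role": "user", "content": f"Explain this to a {audience}."},
    ]

# Example: re-explaining entropy for one of my favorite personas.
msgs = followup_messages(
    "entropy",
    "Kermit the Frog",
    "Entropy is a measure of disorder in a system...",
)
for m in msgs:
    print(m["role"], ":", m["content"])
```

Swapping the audience string is all it takes to generate a different personalized explanation, which is the whole appeal of the technique.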

AI and AR/VR/Spatial Computing are easy to dismiss as gimmicks, toys, and/or primarily sources of problems that committees need to address in formal institutional policies.

I am already trying to teach my students how to use ChatGPT to their benefit. There are a lot of digressions about ethics and the dangers of misuse.

But everyone agrees that these technologies will change our future. And as an English Professor, it is my job to try and prepare my students for that future as best I can.

To do that, I think I will need this space to think out loud.