AI

Looking Forward to October

Unlike the professional Apple pundits, like the team at Connected (who are in the midst of their annual fundraiser for St. Jude's), there is no cost or benefit associated with any predictions I make about Apple products. And, as I've admitted in the past, there's as much wishing things into existence when I do this as there is true analysis.

That said, I have some thoughts about October.

I've written before about what I think (read: want) from the next iPad Mini. I think the recent iPhone announcement hints that what I want is about to arrive.

I think the A18 and A18 Pro chips that went into the latest iteration of the iPhones will make their way into the base iPad and iPad Mini so that the entire iPad line will be able to use Apple Intelligence. I am also willing to bet that the "mighty" iPad Mini will get the Pro version of the chip.

That doesn't guarantee that the Mini will get external monitor support but I suspect that is coming, too.

The move to the Apple Pencil Pro (with the required shift in camera placement) also makes sense if Apple wants to fully shift to a two-Pencil future (the base Pencil, currently called USB-C, and the Pro).

I wouldn't be surprised if the event's name is “Mighty”, given the rumors of a redesigned Mac Mini. If the HomePod Mini is given some stage time with an Apple Intelligence upgrade to highlight how Siri and Home are moving into the future, it would produce a neat narrative while showing off the prowess of Apple's engineers and what kind of technology they can fit into a small space.

And if I'm wrong? Odds are you'll have forgotten I wrote this by then.

The Best Prepared Faculty to Teach AI Skills Are Already on Your Campus

One of the questions I've seen and heard explicitly and implicitly asked of late is who is going to teach the general undergraduate student population how to use AI. Given the recent Cengage Group report that the majority of recent graduates wish they had been trained on how to use Generative AI, this is a skill colleges and universities will want to incorporate into the curriculum.

Remember: We’re looking at a general student population — not future coders. The world's departments of Computer Science are already working on that problem and grappling with the fact that their colleagues have created algorithms that can do much of what they are teaching their students to do.

Much, but not all.

So here’s what we need our students to learn: They need to learn how to consider a problem deeply and think through its issues. Then, they need to take what they have considered and use it to frame a prompt that consists of a well-defined request accompanied by specific constraints that instruct the Large Language Model how to respond.

This is what every research methods course — whether specific to a major or embedded in the Freshman Composition sequence — tries to teach its students to do.
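To make that concrete, here is a minimal, hypothetical sketch of what such a request-plus-constraints prompt might look like when sent to a model programmatically. The client library, model name, topic, and wording are illustrative assumptions, not a prescription:

```python
# Hypothetical illustration: a well-defined request plus explicit constraints.
# Assumes the OpenAI Python client; any chat-style model works similarly.
from openai import OpenAI

client = OpenAI()

constraints = (
    "Respond in no more than 300 words. "
    "Cite only peer-reviewed sources and list them at the end. "
    "If you are uncertain about a claim, say so explicitly."
)
request = "Summarize the main scholarly debates about food deserts in US cities."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": constraints},
        {"role": "user", "content": request},
    ],
)
print(response.choices[0].message.content)
```

The point is not the code; it is that the thinking lives in the request and the constraints, which is exactly what a research methods course trains students to articulate.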

We are not looking at a significant search for personnel or long-term re-training of those already there. They already have the skills.

They need help reimagining them.

To facilitate this reimagining, what faculty in these areas need is some basic support and training on how to incorporate Generative AI tools and tasks into their curriculum, so they can move past the plagiarism question and begin to see this as an opportunity to finally get students to understand why they have to take their Composition or Methods class.

Administrators will have to figure out how to put the tools in faculty's hands, how to provide the training they need, and how to better reward them for imparting the high-tech, business-ready skills that the AI revolution is demonstrating they can provide.

Apple Intelligence — Hiding in Plain Sight

I haven't installed the iOS or iPadOS 18 beta software. This will come as a surprise to no one. After all, I'm not a developer. I'm not a reviewer. I'm not a Podcaster, YouTuber, or similar Creative who needs to generate content on a regularly scheduled basis.

But I am interested. So, I read, listen, and watch the material being created by reviewers, podcasters, and YouTubers.

Given the public interest in AI, I can understand why these creators keep their focus on Apple Intelligence and whether any signs of it have appeared in the betas. It's their job to let us know if it has or hasn't.

What I am trying to think through is what to make of what often follows in these beta reports: Updates on what new machine learning features have arrived.

These features are not part of what has been branded as Apple Intelligence, but they do draw on artificial intelligence.

I bring this up not to try and shame the content creators struggling to keep up with a fast changing story. By and large, they are doing good work. Rather, I want to highlight how the most significant features and changes AI will bring may be invisible to users.

For those of us trying to make sense of a future that includes generative AI, LLMs, and other machine learning advances, trying to capture these changes clearly for our audiences while various corporations and scholarly communities introduce language that segments the field is no simple task. Nor is it an insignificant one. Trying to explain to colleagues why they should be attentive to these developments involves getting them to see a continuum of technology (one that spell check and predictive text already have them on) and obligates them to grapple with the fuzzy lines branding draws.

I'd love to conclude with a neat and tidy solution as to how to make it clear and comprehensible that Scribble (which I am using to write this post), text smoothing (available in the iPadOS 18 betas), and Apple Intelligence are connected yet distinct. If I could do that, I would be more able to tease out how and where Generative AI could be best employed (and best not employed) in the brainstorming, organizing, outlining, drafting, editing, proofreading, publishing continuum of the writing process as a tool for creation and learning.

Creating and learning to create are two very different things. And I absolutely believe that going back to Blue Books is not the answer. Don't laugh. I know colleagues who have been advocating for that for well over a decade. Several years ago, it was because Blue Books keep students from accessing the internet while they write, making our assessment results tidier even though students will likely never work in a world where they can't access the internet. Now, of course, it's to make sure they don't hand in generated text.

But if we aren't going back to Blue Books, and we want to keep a general public informed about what AI can (and can’t) do, we have to figure out how to make the differences between the expressions of machine learning more approachable.

AI and the Would-Be Author*

At the recent AI for Everyone Summit, one of the things I was asked to think about was the "write my book" tools** that have begun to make an appearance. The goal behind these tools, to offer one 'using AI correctly and ethically’ use case, is to get a rough but clean first draft done by having the AI respond to a detailed, complex prompt (we're talking a paragraph or two of instructions, with additional material potentially uploaded to give the LLM more data to work from). From there, the would-be author proceeds to rewrite what has been generated, checking for GenAI hallucinations and finding places where they need to expand on the generated text to better capture their idea.

These tools, then, can serve as ghostwriters for those who question whether they have the time, inclination, dedication, or skill to produce a vehicle*** for their idea. The complex prompt and the editing of the generated text are where the thinking part of writing takes place.

“Their idea” is where I sense a kind of value here. If you scroll long and far enough on LinkedIn, you are almost certain to come across a post reminding you that ideas don't have value; only the products of ideas have value. I'm sure that you, like me, can think back over the many times you have offered others (or been offered) good ideas that went untaken, only to watch others with a similar idea benefit when they acted.

It's common enough to be a trope — often framed as an "I told you so" being delivered to a bungling husband by a long-suffering sit-com wife.

And if all you are looking for is to get your idea out into the world in a publication, it's hard to argue with using these tools — especially for those in the academy whose annual assessments and tenure and promotion are tied to the number of publications appearing on their c.v.

But the transfer of information from one person to another is only one of writing's purposes. Like any medium of communication, part of writing is engaging the reader and keeping them interested in whatever it is a would-be author is writing about.

During a recent livestream of 58 Keys, I asked William Gallagher for his thoughts on the GenAI tools that are appearing and if he intended to use any of them, given what they can do to an author's voice. In brief, he replied that he could see the utility of a more advanced grammar checking tool but balked at autogenerated text — including autogenerated email.

He pointed out how we, as writers, were advertising our skills with every email (joking that the advanced grammar check may result in false advertising). And he highlighted the response of another participant, who wrote in the chat "If I'm not interested in writing the message, why should I expect someone to be interested in reading it?"

That question, I think, gives hope for would-be authors, gives important guidance for those considering generated text tools, and should give pause to those who believe they can outsource writing to AI.

A message to a team asking for a list of possible times to meet is the kind of message an AI can and should write, assuming everyone's AI agent has access to accurate data. Asking an AI to pitch an idea or propose a solution is riskier because it doesn't pass the "why should I expect them to read it" test.

Rather than de-valuing writing, this highlights the value of good writers — people who have learned the how and why of communicating in a way that creates an expectation of interest in what's being written and why it's important.

———————

* I will be using this phrase throughout this post, but I ask you, gentle reader, to not read it pejoratively. To call this theoretical individual “the user" misses the mark, I think, as it focuses us too much on the tool and not enough on the intent. "Would-be" is needed, however, because our theoretical individual has not yet brought a completed work to publication. Real-world users of these tools, after all, may or may not be authors of prior works.

** I haven't experimented with a specific one at the time of writing.

*** I use "vehicle" here because there are tools that generate images (still or moving), music, presentations, computer code, and likely other forms media I don't know about. This question isn't exclusive to writing.

AI, Copyright, and the Need for Governmental Action

Federico Viticci and John Voorhees of MacStories have released an open letter to EU and US lawmakers and regulators with their concerns over the way that most Large Language Models have been trained. In brief, they point out an obvious and undeniable truth: that training any model on in-copyright text scraped from the open web is intellectual property theft.

What they have written is self-evidently true and it is time for those who can act to act.

Recent comments by Mustafa Suleyman, Microsoft’s CEO for AI, demonstrate that companies, in their rush to get to market, have not even begun to think through the implications of what they are doing.

While I remain hopeful for what AI will be able to do in the future, it is clear that we have a lot of work to do in the present.

There is no question in my mind that those who have scraped copyrighted material need to either license that material or rebuild their models from the ground up based on licensed and out of copyright material.

The smartest companies will get out ahead of this. The least ethical will try to weather the storm or find a buyer and walk away before it hits.

Why I Suspect There Is an Absence of Photorealism in Apple's AI-Generated Images

I've read some commentary here and there about how the images Apple Intelligence generates are insufficiently photorealistic. One member of the MacStories Discord (apologies for the lack of credit where credit’s due; there’s been a lot of discussion there) suggested that the image quality, in this regard, might improve in time as Apple’s models ingest more and develop further.

I observed there that I suspected the choices Apple made about the kinds of images generated by its Machine Learning might be intentional. With machine learning tools that remove people and objects in the background coming to Photos, Apple is showing its models are capable of photorealistic renders.

So why might they not permit Image Playground to do the same?

I can see where some of this may be a limitation dictated by on-device computational power. While I’m no engineer, I would guess that repairing part of a photograph asks less of a Neural Engine than creating a full photo from scratch. Even so, cloud computing would be an easy enough way around this.

Rather, I suspect it has everything to do with user responses to the generated images and, sadly, the motivations some users might have. Transforming a prompt into an image that is obviously not real is usually more than enough to meet most users' needs. The creation of a gazebo for a Keynote, to use Apple's example, is one where a visual concept is being communicated and that communication does not suffer from a lack of photorealism.

But there are cases where people can and will suffer from the malicious deployment of photorealistic renders. Indeed, in a rare case of high-profile bipartisanship, the US Congress is (finally!) moving to criminalize deepfake revenge porn. Senators Ted Cruz (R) and Amy Klobuchar (D), both former candidates for their party's presidential nomination, have found easy common ground here.

As well they should. It has been reported that middle schoolers in Florida and California (and, I suspect, elsewhere), a demographic seldom cited as a model of good sense and sound decision-making, have learned they can use AI to generate photorealistic nudes of their classmates.

There's a reason we find Lord of the Flies plausible, even if the real-life event turned out better than the fiction.

It's the kind of problem that even an AI-enthusiastic techbro focused on making the Star Trek computer or their own personal Jarvis should have seen coming, because it always seems to happen.

It was obvious enough to come up in a meeting at Apple.

And sidestepping photorealism is an obvious solution.

Keeping Image Playground in check in this regard makes good sense and, I would argue, protects users from those who might use its technology for malicious purposes.

Writing and AI’s Uncanny Valley, Part Three

We respond to different kinds of writing differently. We do not expect a personal touch, for example, from a corporate document — no matter how much effort a PR team puts into it. We know the brochure or email we are reading (choose your type of organization here) is not tailored for us — even when a mail merge template uses our name.

Or perhaps I should say especially when it uses our name and then follows it with impersonal text.

But a direct message from someone we know, whether personally or professionally? We expect that to be from the person writing us.

And sound like it.

This is where AI-assisted writing will begin to differentiate itself from the writing we have engaged with up to now. If it is a corporate document (the kind of document we expect to have come from a committee and then pass through many hands), no one will blink if they detect AI. I suspect that this is where AI-assisted writing will flourish and be truly beneficial.

To explain why, let me offer a story from a friend who used to work in a financial services company. Back in the 80s, some of their marketing team started using a new buzzword and wanted to incorporate it into their product offerings, explaining to the writing team that they wanted to tell their older clients how they intended to be opportunistic with the money they had given them to invest.

The writers in the room, imagining little old ladies in Florida who preferred the Merriam-Webster Dictionary definitions to whatever market-speak was the flavor of the month, were horrified and asked if the marketers understood that ‘opportunistic’ meant “exploiting opportunities with little regard to principle.” The marketers, oblivious to how their audience would hear that word, had been prepared to tell their company’s clients they wanted to take advantage of them.

AI is simultaneously well positioned to catch mistakes like that and assist teams in fixing them, and to manufacture such mistakes, which will need to be caught by the same teams, all based on the data (read: writing) it was trained on. I stress assist here because even the best generative AI needs to have its work looked over in the same way any human writer does. And because they are statistical models of language, they are subject to making the same kinds of mistakes the marketers I was just describing did.

AIs will also be beneficial as first-pass editors, giving writers a chance to see what grammar and other issues might need fixing. I don’t expect to see them replacing human editors any time soon, as integrating changes into a text in someone else’s voice is a challenging skill to develop.

Personal correspondence, whether an email or a letter, and named-author writing will be the most challenging writing to integrate with generative AI. Lower-level AI work of the kind we are currently used to (spellcheck and grammar checking) will continue to be tools that writers can freely use. I haven't downloaded Apple’s iPadOS 18 beta software, but I will be interested to see how it performs in that regard.

The kind of changes generative AI produces runs the risk of eroding the trust of the reader, whether they can put their finger on why they are uncomfortable with what they are reading or not.

Yes, there is a place for generative AI in the future of writing. I suspect the two companies best positioned to eventually take the lead are Apple (which is focusing its efforts on-device, where it can learn the voice of a specific user) and Google (which has been ingesting its users’ writing for some time, even if it has been for other purposes). Microsoft's Office suite could be similarly leveraged in the enterprise space, but I don't have the sense people turn to it for personal writing.

That may tell you more about me than the general population.

These three usual suspects, along with whatever startups believe they are ready to change our world, will need to learn how to focus large language models on individual users more broadly. Most text editors can already complete sentences in our voices. The next hurdle will be getting these tools to effectively compose longer texts.

If, in fact, that is what we decide we want.*

———————-

* Not to go full 1984 on you, but I do wonder how the power to shift thought through language might be used and abused by corporations and nation states through a specifically trained LLM.

Writing and AI’s Uncanny Valley, Part Two

As I mentioned yesterday, the final papers I received this semester read oddly, with intermittent changes in the voice of the writer. In years past, this shift would be a sure sign of plagiarism. The occasional odd word suggested by a tutor or thesaurus isn't usually enough to give me the sense of this kind of shift. It's when whole sentences (or significant parts of sentences) shift that I feel compelled to do a spot search for the original source.

More often than not, this shift in voice is the result of a bad paraphrase that's been inappropriately cited (e.g., it's in the bibliography but not cited in the text). More rarely, it's a copy/paste job.

With this semester’s final papers, I have begun to hear when students using AI appropriately are having sections of their paper "improved" in ways that change their voice.

This matters (for those who are wondering) because our own voices are what we bring to a piece of writing. To lose one's voice through an AI's effort is to surrender that self-expression. There may be times when a more corporate voice is appropriate, but even the impersonal tone of a STEM paper has something of its author there.

To get a sense of how much was being lost when an AI was asked to improve a piece of writing, I took five posts from this blog and asked ChatGPT 4o and Google Gemini to improve them. I uploaded the fifteen files into Lexos, a textual evaluation tool developed at Wheaton College by Dr. Michael Drout, Professor of English, and Dr. Mark LeBlanc, Professor of Computer Science.

The Lexos tool is sufficiently powerful that I am certain that I’m not yet using it to its full capacity and learning to do so is quickly becoming a summer project. But the initial results from two of the tools were enough to make me expand my initial experiment by adding four additional texts and then a fifth.

The four texts were William Shakespeare's Henry V, Christopher Marlowe's Tamburlaine and Doctor Faustus, and W.B. Yeats' The Celtic Twilight, as found on Project Gutenberg. The first three were spur-of-the-moment choices of texts distant enough from me in time as to be neutral. I added Yeats out of a kind of curiosity to see if my reading and rereading of his work had made a noticeable impact on my writing.

Spoilers: It hadn't, at least not at first blush. But the results made me seek out a control text. For this fifth choice, I added Gil Scott-Heron's "The Revolution Will Not Be Televised" because of its hyper-focus on events and word choice of the late sixties and early seventies. This radical difference served as a kind of control for my experiment.

The first Lexos tool that hinted at something was the Dendrogram visualization, which shows family tree-style relationships between texts. There are different methodologies (with impressive-sounding names) that Lexos can apply that produce variant arrangements based on different statistical models.

The Dendrogram Groupings generated by Lexos.

These showed predictable groupings. Scott-Heron was the obvious outlier, as was expected of a control text. The human-composed texts by other authors clustered together, which I should have expected (although the close association between Henry V and Tamburlaine, perhaps driven by the battle scenes, was an interesting result). Likewise, the closer association between the ChatGPT rewrites and the originals came as no surprise, as Gemini had transformed the posts from paragraphs to bulleted lists.

What did come as a surprise were the results of the Similarity Query, which I as much stumbled across as sought out. Initially, I had come to Lexos looking for larger, aggregate patterns rather than looking at how a single text compared with the others.

It turned out the Similarity Queries were what showed the difference between human-written text and machine-generated text.

Similarity Query for Blog Post Zero. The top of the letters for “The Revolution Will Not Be Televised” can barely be seen at the bottom of the list.

Gil Scott-Heron remained the outlier, as a control text should.

The ChatGPT 4o rewrite of any given post was listed as the closest text to the original, as one would expect.

What I did not expect was what came next. In order, what appeared was:

  • The non-control human texts.

  • The ChatGPT texts.

  • The Gemini texts.

The tool repeatedly marked the AI-generated text as different from the human writing and more like other AI-generated text than like any human author.

Here, Lexos dispassionately quantifies what I experienced while reading those essays. The changes made by generative AI change the voice of the writer, supplanting that voice with its own.
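For readers curious about what a similarity comparison measures, here is a deliberately simplified sketch of one common approach: cosine similarity over word counts. This is not Lexos's actual implementation, and the file names are placeholders, but it shows the general idea of comparing two texts' word-frequency profiles:

```python
# A simplified illustration of a document similarity measure (cosine
# similarity over raw word counts). Lexos's own statistics are more
# sophisticated; this only shows the general idea.
import math
from collections import Counter

def word_counts(text: str) -> Counter:
    """Lowercase the text, split on whitespace, and count the tokens."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """1.0 means identical word-frequency profiles; 0.0 means no overlap."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Placeholder file names for an original post and its AI rewrite.
original = word_counts(open("blog_post_zero.txt").read())
rewrite = word_counts(open("blog_post_zero_chatgpt.txt").read())
print(f"Similarity: {cosine_similarity(original, rewrite):.3f}")
```

Run across every pairing of texts, numbers like these are what let a tool group the AI rewrites with each other rather than with their human originals.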

This has serious implications for writers, editors, and those who teach them.

It also has implications for the companies that make these AI/LLM tools.

I will discuss some of that in tomorrow’s post.

Writing and AI’s Uncanny Valley, Part One

TLDR: A reader can tell when an AI rewrites your work and that shift in the text will give your readers pause in the same way generative images do, eroding their trust in what they are reading.

A more academic version of this research will be submitted for peer review but I wanted to talk about it here as well.

First, a disclaimer: I am hopeful for our AI-supported future. I recognize that it will come with major disruptions to our way of life and the knowledge industry will be impacted in the same way that manufacturing was disrupted by the introduction of robots in the 1980s.

The changes this will create won't be simple or straightforward.

Those considerations are for a different post and time.

Right now, I want us to look at where we are.

During the 2023-24 academic year, I began integrating generative AI into my classes. As I am an English professor, most of this focused on how to responsibly use large language models and similar technologies to better student writing and assist students with their research.

It was a process that moved in fits and starts, as you might imagine. As the models changed (sometimes for the better, sometimes for the worse) and advanced (as, for example, ChatGPT moved from 3.0 to 3.5 to 4o), I had to adjust what I was doing.

My biggest adjustment, however, came with the final papers, when I unexpectedly found myself staring into an uncanny valley.

One of the things English professors are trained to do is hear the shifts in language as we read. The voice we hear (or at least listen for) is the unique voice of a writer. It's what makes it possible to read parts of “Ode to the West Wind” and “Ode on a Grecian Urn” and recognize which one is Shelley and which one is Keats.

We don't learn to do this as a parlor trick or to be ready for some kind of strange pub quiz. We learn to do this because shifts in tone and diction carry meaning in the same way a speaker's tone of voice carries meaning for a listener.

Listening for a writer's voice may be the single most important skill an editor develops, as it allows them to suggest changes that improve an author's writing while remaining invisible to the reader. It's what separates good editors from great ones.

For those grading papers, this skill is what lets us know when students have begun to plagiarize, whether accidentally or intentionally.

But this time, I heard a different kind of change — one that I quickly learned wasn't associated with plagiarism.

It had been created by the AI that I had told the students to use.

After grading the semester's papers, the question I was asking shifted from if a generative AI noticeably changed a writer's voice to how significantly generative AI changed a writer's voice.

Spoilers: The changes were noticeable and significant.

How I went about determining this is the subject of tomorrow's post.

Advice Instead of Just Complaining

Thus far in this blog's reboot, I have spent a good bit of time asserting that faculty need to change, adapt, and grow. But I have provided few examples and less advice on how we might do that.

In this post, I'd like to provide a place to start.

I'm an English Professor so my practical advice begins there. (Here?)

For many years, I have asked students to submit papers that required research. I spent time explaining how, where, and why to conduct that research.

With the arrival of Large Language Models like ChatGPT, I have begun to recognize I forgot to tell students something important about their research.

Here is what I am now telling them:

Whatever research they do, I can replicate. If I wanted to learn about their topics, I could go to the library and read the articles they have found. I can easily access what other scholars, journalists, or other experts have already said.

What I cannot do is find what they think about the topic.

That is what they bring to their assignments that ChatGPT never can.

And that is what I value most.

The rules of grammar and the skill of writing still matter, of course, but in an age when machines learn and compose, students need to be reminded of the central value of their voice and viewpoint, even if those are imperfectly and partially formed.

But research should be there to support their thoughts — not replace them.

And my job is to help them better learn to express their own thoughts — not merely parrot the thoughts of others.

The challenge for all of us is that we cannot always be interested, and sometimes we have to be more prescriptive so that students learn the skills they need to express themselves. We get tired and overwhelmed. We can’t be there 100% of the time, and we are often asked to care for more students than we should by administrators whose job it is to focus on the numbers.

But our students have to learn that we try.

The Sky is Still Falling (Long Term)

Before returning to some of the technical and pedagogical issues involved with AI in the classroom, it is worth understanding some of the personal and personnel aspects of all this. Without understanding these concerns, a full appreciation of the existential threat AI presents to the academy in general and the professoriate in particular can get lost in the shuffle while people focus on academic dishonesty and the comedy that can ensue when ChatGPT gets something wrong.

A few data points:

It has not been long since a student at Concordia University in Montreal discovered the professor teaching his online Art History class had been dead for two years.

Not only are deepfakes trivially easy to create, but 3D capture tools are making it easy for anyone to make full-body models of subjects.

You can now synthesize a copy of your own voice on a cell phone.

We can digitally clone ourselves.

You can guess where this is going.

Many years ago (2014, for those recording box scores), I told a group of faculty that the development of good online teaching carried with it an inherent risk -- the risk of all of us becoming TAs to rock star teachers. When I explained this, I told my audience that, while I considered myself a good teacher, I had (as a chair) observed and (as a student) learned from great teachers.

I asked then and sometimes ask myself now: What benefit could I, and JCSU, offer to students signing up for my class that outweighed the benefit of taking an online class with that kind of academic rock star?

I still don't feel I have a compelling answer for that question.

Now, in addition to competing with the rock stars of the academy, there is a new threat. It is now simple enough to create an avatar (perhaps one of a beloved professor or a revered figure, say, Albert Einstein or Walter Cronkite) and link it to a version of ChatGPT or Google Bard that has been taught by a master teacher how to lead a class, a scenario discussed in a recent Future Trends Forum on “Ethics, AI, and the Academy".

How long until an Arizona State reveals a plan for working it into their Study Hall offering?

AI may not be ready for prime time because it can still get things wrong.

But, then again, so do I.

The pieces necessary to do that kind of thing have been lying around since 2011. Now, even the slow-moving academy is beginning to pivot in that direction.

ChatGPT: Fear and Loathing

I wanted to spend some time thinking through the fear and loathing ChatGPT generates in the academy and what lies behind it. As such, this post is less a well-written essay than it is a cocktail party of ideas and observations waiting for a thesis statement to arrive.

Rational Concerns

As I have already mentioned (and will mention again below), the academy tends to be a conservative place. We change slowly because the approach we take has worked for a long time.

A very long time.

When I say a long time, consider that some of the works by Aristotle studied in Philosophy classes are his lecture notes. I would also note we insist on dressing as if it were winter in Europe several hundred years ago, even when commencement is taking place in the summer heat of the American South.

While faculty have complained about prior technological advances (as well as how hot it gets in our robes), large language models are different. Prior advances -- say, the calculator/abacus or spell check -- have focused on automating mechanical parts of a process. While spell check can tell you how to spell something, you have to be able to approximate the word you want for the machine to be able to help you.

ChatGPT not only spells the words; it can provide them.

In brief, it threatens to do the thinking portion for its user.

Now, in truth, it is not doing the thinking. It is replicating prior thought by predicting the next word based on what people have written in the past. This threatens to replace the hard part of writing -- the generation of original thought -- with its simulacrum.
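If it helps to see how unglamorous "predicting the next word" is at its core, here is a toy sketch. A real large language model works over billions of parameters rather than a lookup table, but the basic move of choosing a statistically likely continuation from prior writing is the same:

```python
# Toy next-word predictor: count which word most often follows each word in a
# sample, then suggest the most common continuation. This is not how an LLM is
# built, but it illustrates "predicting the next word from prior writing."
from collections import Counter, defaultdict

sample = "writing is thinking and thinking is hard and writing is practice"
words = sample.split()

next_word = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word[current][following] += 1

# The most likely continuation after "is", given only what has been seen.
print(next_word["is"].most_common(1))  # e.g. [('thinking', 1)]
```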

Thinking is hard. It's tiring. It requires practice.

Writing is one of the places where it can be practiced.

The disturbing thing about pointing out this generation of simulacra by students, however, is that too many of our assignments ask students to do exactly that. Take, for example, an English professor who gives their students a list of five research topics to choose from.

Whatever the pedagogical advantages and benefits of such an approach, it is difficult to argue that such an assignment is not asking the student to create a simulacrum of what they think their professor wants rather than asking them to generate their own thoughts on a topic they are passionate about.

It is an uncomfortable question to have to answer: How is what I am asking of the students truly beneficial and what is the "value add" that the students receive from completing it instead of asking ChatGPT to complete it?

Irrational Concerns

As I have written about elsewhere, faculty will complain about anything that changes their classroom. The massive adjustments the COVID-19 pandemic forced on the academy produced much wailing and gnashing of teeth as we were dragged from the 18th Century into the 21st. Many considered retirement rather than having to learn and adjust.

Likewise, the story of the professor who comes to class with lecture notes, discolored by age and never updated since their creation, is too grounded in reality to be ignored here. (Full disclosure: I know I have canned responses, too. For each generation of students, the questions are new -- no matter how many times I have answered them before.)

Many of us simply do not wish to change.

Practical Concerns

Learning how to use ChatGPT and thinking through its implications takes time and resources. Faculty Development (training, for those in other fields, although it is a little more involved than just training) is often focused on other areas: the research that advances our reputations, rank, and careers.

Asking faculty to divert their attention to ChatGPT when they have an article to finish is a tough sell. It is potentially a counterproductive activity, depending on where you are in your career.

Why Start Up Again?

One of the things that those of us who teach writing will routinely tell students, administrators, grant-making organizations, and anyone else foolish enough to accidentally ask our thoughts on the matter, is that writing is a kind of thinking.

The process of transmitting thoughts via the written word obligates a writer to recast vague thoughts into something more concrete. And the act of doing so requires the writer to test those thoughts and fill in the mental gaps for the sake of the reader, who cannot follow the hidden paths thought takes.

I am not sure about all of you, dear readers (at least I hope there is more than one of you), but I am in need of clearer, more detailed thought about technology these days.

Educators have been complaining about how technology makes our students more impoverished learners at least since Plato told the story of how the god Thoth's invention of writing would destroy memory.

Between the arrival of Large Language Model-based Artificial Intelligence and the imminent arrival of Augmented and Virtual Reality in the form of Apple Vision Pro, the volume of concern and complaint is once more on the rise.

I have my concerns, of course. But I am also excited for the potential these technologies offer to assist students in ways that were once impossible.

For example, ask ChatGPT to explain something to you. It will try to do so but, invariably, it will be pulling from sources that assume specialized knowledge, the same specialized knowledge that makes it difficult for students to comprehend a difficult concept.

But after this explanation is given, you can enter a prompt that begins “Explain this to a…”

Fill in that blank with some aspect of your persona. A Biology major. A football player. A theater goer. A jazz aficionado.

You can even fill in types of animals, famous figures — real or fictional (I am fond of using Kermit the Frog), or other odd entities (like Martians).

In short, ChatGPT will personalize an explanation for every difficult concept for every student.
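For those who want to try this programmatically rather than in the chat window, here is a hypothetical sketch of the follow-up. The client library, model name, topic, and persona are illustrative assumptions; the point is simply that the follow-up prompt rides on top of the earlier explanation:

```python
# Hypothetical sketch of the "Explain this to a..." follow-up prompt.
# Assumes the OpenAI Python client; the chat interface works the same way.
from openai import OpenAI

client = OpenAI()
conversation = [{"role": "user", "content": "Explain iambic pentameter."}]

first = client.chat.completions.create(model="gpt-4o", messages=conversation)
conversation.append({"role": "assistant", "content": first.choices[0].message.content})

# The personalizing step: swap in whatever persona the student identifies with.
conversation.append({"role": "user", "content": "Explain this to a jazz aficionado."})
second = client.chat.completions.create(model="gpt-4o", messages=conversation)
print(second.choices[0].message.content)
```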

AI and AR/VR/Spatial Computing are easy to dismiss as gimmicks, toys, and/or primarily sources of problems that committees need to address in formal institutional policies.

I am already trying to teach my students how to use ChatGPT to their benefit. There are a lot of digressions about ethics and the dangers of misuse.

But everyone agrees that these technologies will change our future. And as an English Professor, it is my job to try and prepare my students for that future as best I can.

To do that, I think I will need this space to think out loud.