
AI and the Would-Be Author*

At the recent AI for Everyone Summit, one of the things I was asked to think about is the "write my book" tools** that have begun to make an appearance. The goal behind these tools, to offer one "using AI correctly and ethically" use case, is to get a rough but clean first draft done by having the AI respond to a detailed, complex prompt (we're talking about a paragraph or two of instructions, with additional material potentially uploaded to give the LLM more data to work from). From there, the would-be author proceeds to rewrite what was generated, checking for GenAI hallucinations and for places where they need to expand on the generated text to better capture their idea.

These tools, then, can serve as ghostwriters for those who question whether they have the time, inclination, dedication, or skill to produce a vehicle*** for their idea. The complex prompt and the editing of the generated text are where the thinking part of writing takes place.

“Their idea” is where I can sense a kind of value here. If you scroll long and far enough on LinkedIn, you are almost certain to come across a post that reminds you that ideas don't have value because only the products of ideas have value. I'm sure that you, like me, can think back over the many times you have offered others (or been offered) good ideas that went untaken, only to watch others benefit when they acted on a similar idea.

It's common enough to be a trope — often framed as an "I told you so" being delivered to a bungling husband by a long-suffering sit-com wife.

And if all you are looking for is to get your idea out into the world in a publication, it's hard to argue with using these tools — especially for those in the academy whose annual assessments and tenure and promotion are tied to the number of publications appearing on their c.v.

But the transfer of information from one person to another is only one of writing's purposes. Like any medium of communication, part of writing is engaging the reader and keeping them interested in whatever it is a would-be author is writing about.

During a recent livestream of 58 Keys, I asked William Gallagher for his thoughts on the GenAI tools that are appearing and if he intended to use any of them, given what they can do to an author's voice. In brief, he replied that he could see the utility of a more advanced grammar checking tool but balked at autogenerated text — including autogenerated email.

He pointed out how we, as writers, were advertising our skills with every email (joking that the advanced grammar check may result in false advertising). And he highlighted the response of another participant, who wrote in the chat "If I'm not interested in writing the message, why should I expect someone to be interested in reading it?"

That question, I think, gives hope for would-be authors, gives important guidance for those considering generated text tools, and should give pause to those who believe they can outsource writing to AI.

A message to a team proposing a list of possible meeting times is the kind of message an AI can and should write — assuming everyone's AI agent has access to accurate data. Asking an AI to pitch an idea or propose a solution is riskier because it doesn't pass the "why should I expect them to read it" test.

Rather than de-valuing writing, this highlights the value of good writers — people who have learned the how and why of communicating in a way that creates an expectation of interest in what's being written and why it's important.

———————

* I will be using this phrase throughout this post, but I ask you, gentle reader, not to read it pejoratively. To call this theoretical individual "the user" misses the mark, I think, as it focuses us too much on the tool and not enough on the intent. "Would-be" is needed, however, because our theoretical individual has not completed the task of bringing their completed work to publication. Real-world users of these tools, after all, may or may not be authors of prior works.

** I haven't experimented with a specific one at the time of writing.

*** I use "vehicle" here because there are tools that generate images (still or moving), music, presentations, computer code, and likely other forms of media I don't know about. This question isn't exclusive to writing.

Writing and AI’s Uncanny Valley, Part Two

As I mentioned yesterday, the final papers I received this semester read oddly, with intermittent changes in the voice of the writer. In years past, this shift would be a sure sign of plagiarism. The occasional odd word suggested by a tutor or thesaurus isn't usually enough to give me the sense of this kind of shift. It's when whole sentences (or significant parts of sentences) shift that I feel compelled to do a spot search for the original source.

More often than not, this shift in voice is the result of a bad paraphrase that's been inappropriately cited (e.g., it's in the bibliography but not cited in the text). More rarely, it's a copy/paste job.

With this semester's final papers, I have begun to hear when students using AI appropriately are having sections of their papers "improved" in ways that change their voice.

This matters (for those who are wondering) because our own voices are what we bring to a piece of writing. To lose one's voice through an AI's effort is to surrender that self-expression. There may be times when a more corporate voice is appropriate, but even the impersonal tone of a STEM paper has something of its author there.

To get a sense of how much was being lost when an AI was asked to improve a piece of writing, I took five posts from this blog and asked ChatGPT 4o and Google Gemini to improve them. I uploaded the fifteen files into Lexos, a textual evaluation tool developed at Wheaton College by Dr. Michael Drout, Professor of English, and Dr. Mark LeBlanc, Professor of Computer Science.

The Lexos tool is sufficiently powerful that I am certain I'm not yet using it to its full capacity, and learning to do so is quickly becoming a summer project. But the early results from two of its tools were enough to make me expand my initial experiment by adding four additional texts and then a fifth.

The four texts were William Shakespeare's Henry V, Christopher Marlowe's Tamburlaine and Doctor Faustus, and W.B. Yeats' The Celtic Twilight — as found on Project Gutenberg. The first three were spur-of-the-moment choices of texts distant enough from me in time to be neutral. I added Yeats out of a kind of curiosity to see if my reading and rereading of his work had made a noticeable impact on my writing.

Spoilers: It hadn't — at least not at first blush. But the results made me seek out a control text. For this fifth choice, I added Gil Scott-Heron's "The Revolution Will Not Be Televised" because of its hyper-focus on the events and word choice of the late sixties and early seventies. This radical difference served as a kind of control for my experiment.

The first Lexos tool that hinted at something was the Dendrogram visualization, which shows family tree-style relationships between texts. There are different methodologies (with impressive-sounding names) that Lexos can apply that produce variant arrangements based on different statistical models.

The Dendrogram Groupings generated by Lexos.

These showed predictable groupings. Scott-Heron was the obvious outlier, as was expected of a control text. The human-composed texts by other authors clustered together, which I should have expected (although the close association between Henry V and Tamburlaine — perhaps driven by the battle scenes — was an interesting result). Likewise, the closer association between the ChatGPT rewrites and the originals came as no surprise, as Gemini had transformed the posts from paragraphs to bulleted lists.
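For the technically curious, this kind of dendrogram can be roughly approximated outside Lexos with a few lines of Python. What follows is a minimal sketch of the general technique (relative word frequencies plus hierarchical clustering), not a reconstruction of Lexos's actual models; the corpus folder and file names are placeholders.

    # A minimal sketch of dendrogram-style clustering, in the spirit of
    # (but far cruder than) what Lexos provides. The "corpus" folder and
    # file names are placeholders, not my actual texts.
    from pathlib import Path

    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram, linkage
    from sklearn.feature_extraction.text import CountVectorizer

    files = sorted(Path("corpus").glob("*.txt"))  # posts, rewrites, plays
    texts = [f.read_text(encoding="utf-8") for f in files]
    labels = [f.stem for f in files]

    # Represent each text by its relative word frequencies.
    counts = CountVectorizer().fit_transform(texts).toarray()
    freqs = counts / counts.sum(axis=1, keepdims=True)

    # Ward is one of several linkage methods; swapping it (e.g., for
    # "average") rearranges the tree, much as the different Lexos
    # methodologies produce variant arrangements.
    tree = linkage(freqs, method="ward")
    dendrogram(tree, labels=labels, orientation="left")
    plt.tight_layout()
    plt.show()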

What did come as a surprise were the results of the Similarity Query, which I as much stumbled across as sought out. Initially, I had come to Lexos looking for larger, aggregate patterns rather than looking at how a single text compared with the others.

It turned out the Similarity Queries were the results that showed the difference between human-written text and machine-generated text.

Similarity Query for Blog Post Zero. The top of the letters for “The Revolution Will Not Be Televised” can barely be seen at the bottom of the list.

Gil Scott-Heron remained the outlier, as a control text should.

The ChatGPT 4o rewrite of any given post was listed as the closest text to the original, as one would expect.

What I did not expect was what came next. In order, what appeared was:

  • The non-control human texts.

  • The ChatGPT texts.

  • The Gemini texts.

The tool repeatedly marked the AI-generated text as different: like itself rather than like a human.
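For those who want the mechanics, a similarity query of this general kind can be approximated by ranking texts by cosine distance over their word-frequency vectors. The sketch below illustrates that technique under that assumption; it is not Lexos's exact computation, and the document contents are stand-ins.

    # A minimal sketch of a similarity query: rank every text by its
    # cosine distance from one chosen text. This illustrates the general
    # technique, not Lexos's exact computation; the document contents
    # below are placeholders.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_distances

    documents = {
        "post_zero": "full text of the original blog post...",
        "post_zero_chatgpt": "full text of the ChatGPT rewrite...",
        "post_zero_gemini": "full text of the Gemini rewrite...",
        "revolution": "full text of the control poem...",
    }

    names = list(documents)
    vectors = CountVectorizer().fit_transform(documents.values())
    distances = cosine_distances(vectors)

    # Sort everything by distance from the first text (the original
    # post); 0.0 means an identical vocabulary profile.
    for name, d in sorted(zip(names, distances[0]), key=lambda p: p[1]):
        print(f"{name:20s} {d:.3f}")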

Here, Lexos dispassionately quantifies what I experienced while reading those essays. The edits made by generative AI change the voice of the writer, supplanting that voice with the AI's own.

This has serious implications for writers, editors, and those who teach them.

It also has implications for the companies that make these AI/LLM tools.

I will discuss some of that in tomorrow’s post.

Advice Instead of Just Complaining

Thus far in this blog's reboot, I have spent a good bit of time asserting that faculty need to change, adapt, and grow. But I have provided few examples and less advice on how we might do that.

In this post, I'd like to provide a place to start.

I'm an English Professor, so my practical advice begins there. (Here?)

For many years, I have asked students to submit papers that required research. I spent time explaining how, where, and why to conduct that research.

With the arrival of Large Language Models like ChatGPT, I have begun to recognize I forgot to tell students something important about their research.

Here is what I am now telling them:

Whatever research they do, I can replicate. If I wanted to learn about their topics, I could go to the library and read the articles they have found. I can easily access what scholars, journalists, or other experts have already said.

What I cannot do is find what they think about the topic.

That is what they bring to their assignments that ChatGPT never can.

And that is what I value most.

The rules of grammar and the skill of writing still matter, of course, but in an age when machines learn and compose, students need to be reminded of the central value of their voice and viewpoint -- even if those are imperfect and only partially formed.

But research should be there to support their thoughts — not replace them.

And my job is to help them better learn to express their own thoughts — not merely parrot the thoughts of others.

The challenge for all of us is that we cannot always be interested, and sometimes we have to be more prescriptive so that students learn the skills they need to express themselves. We get tired and overwhelmed. We can't be there 100% of the time, and we are often asked to care for more students than we should by administrators whose job it is to focus on the numbers.

But our students have to learn that we try.

The Sky is Still Falling (Long Term)

Before returning to some of the technical and pedagogical issues involved with AI in the classroom, it is worth understanding some of the personal and personnel aspects of all this. Without an understanding of these concerns, the existential threat AI presents to the academy in general and the professoriate in particular can get lost in the shuffle while people focus on academic dishonesty and the comedy that can ensue when ChatGPT gets something wrong.

A few data points:

It has not been long since a student at Concordia University in Montreal discovered the professor teaching his online Art History class had been dead for two years.

Not only are deepfakes trivially easy to create, but 3D capture tools are also making it easy for anyone to make full-body models of subjects.

You can now synthesize a copy of your own voice on a cell phone.

We can digitally clone ourselves.

You can guess where this is going.

Many years ago (2014, for those recording box scores), I told a group of faculty that the development of good online teaching carried with it an inherent risk -- the risk of all of us becoming TAs to rock star teachers. When I explained this, I told my audience that, while I considered myself a good teacher, I had (as a chair) observed and (as a student) learned from great teachers.

I asked then and sometimes ask myself now: What benefit could I, and JCSU, offer to students signing up for my class that outweighed the benefit of taking an online class with that kind of academic rock star?

I still don't feel I have a compelling answer to that question.

Now, in addition to competing with the rock stars of the academy, there is a new threat. It is now simple enough to create an avatar -- perhaps one of a beloved professor or a revered figure (say, Albert Einstein or Walter Cronkite) -- and link it to a version of ChatGPT or Google Bard that has been taught by a master teacher how to lead a class — a scenario discussed in a recent Future Trends Forum on “Ethics, AI, and the Academy".

How long until an Arizona State reveals a plan for working it into its Study Hall offering?

AI may not be ready for prime time because it can still get things wrong.

But, then again, so do I.

The pieces necessary to do that kind of thing have been lying around since 2011. Now, even the slow-moving academy is beginning to pivot in that direction.

ChatGPT: Fear and Loathing

I wanted to spend some time thinking through the fear and loathing ChatGPT generates in the academy and what lies behind it. As such, this post is less a well written essay than it is a cocktail party of ideas and observations waiting for a thesis statement to arrive.

Rational Concerns

As I have already mentioned (and will mention again below), the academy tends to be a conservative place. We change slowly because the approach we take has worked for a long time.

A very long time.

When I say a long time, consider that some of the works by Aristotle studied in Philosophy classes are his lecture notes. I would also note that we insist on dressing like it is winter in Europe several hundred years ago — even when commencement is taking place in the summer heat of the American South.

While faculty have complained about prior technological advances (as well as how hot it gets in our robes), large language models are different. Prior advances -- say, the calculator/abacus or spell check -- have focused on automating mechanical parts of a process. While spell check can tell you how to spell something, you have to be able to approximate the word you want for the machine to be able to help you.

ChatGPT not only spells the words; it can provide them.

In brief, it threatens to do the thinking portion for its user.

Now, in truth, it is not doing the thinking. It is replicating prior thought by predicting the next word based on what people have written in the past. This threatens to replace the hard part of writing -- the generation of original thought -- with its simulacrum.

Thinking is hard. It's tiring. It requires practice.

Writing is one of the places where it can be practiced.

The disturbing thing about pointing out this generation of simulacra by students, however, is that too many of our assignments ask students to do exactly that. Take, for example, an English professor who gives their students a list of five research topics to choose from.

Whatever the pedagogical advantages and benefits of such an approach, it is difficult to argue that such an assignment is not asking the student to create a simulacrum of what they think their professor wants rather than asking them to generate their own thoughts on a topic they are passionate about.

It is an uncomfortable question to have to answer: How is what I am asking of the students truly beneficial, and what is the "value add" the students receive from completing it themselves instead of asking ChatGPT to complete it?

Irrational Concerns

As I have written about elsewhere, faculty will complain about anything that changes their classroom. The massive adjustments the COVID-19 pandemic forced on the academy produced much wailing and gnashing of teeth as we were dragged from the 18th Century into the 21st. Many considered retirement rather than having to learn and adjust.

Likewise, the story of the professor who comes to class with lecture notes, discolored by age and never updated since their creation, is too grounded in reality to be ignored here. (Full disclosure: I know I have canned responses, too. For each generation of students, the questions are new -- no matter how many times I have answered them before.)

Many of us simply do not wish to change.

Practical Concerns

Learning how to use ChatGPT, and thinking through its implications, takes time and resources. Faculty Development (training, for those in other fields -- although it is a little more involved than just training) is often focused on other areas -- the research that advances our reputations, rank, and career.

Asking faculty to divert their attention to ChatGPT when they have an article to finish is a tough sell. It is potentially a counter-productive activity, depending on where you are in your career.

Why Start Up Again?

One of the things that those of us who teach writing will routinely tell students, administrators, grant-making organizations, and anyone else foolish enough to accidentally ask our thoughts on the matter, is that writing is a kind of thinking.

The process of transmitting thoughts via the written word obligates a writer to recast vague thoughts into something more concrete. And the act of doing so requires us to test those thoughts and fill in the mental gaps for the sake of our readers, who cannot follow the hidden paths our thoughts take.

I am not sure about all of you, dear readers (at least I hope there is more than one of you), but I am in need of clearer, more detailed thought about technology these days.

Educators have been complaining about how technology makes our students more impoverished learners at least since Plato told the story of how the god Thoth's invention of writing would destroy memory.

Between the arrival of Large Language Model-based Artificial Intelligence and the imminent arrival of Augmented and Virtual Reality in the form of Apple Vision Pro, the volume of concern and complaint is once more on the rise.

I also have my concerns, of course. But I am also excited for the potential these technologies offer to assist students in ways that were once impossible.

For example, ask ChatGPT to explain something to you. It will try to do so but, invariably, it will be pulling from sources that assume specialized knowledge — the same specialized knowledge that makes the concept difficult for students to comprehend in the first place.

But after this explanation is given, you can enter a prompt that begins “Explain this to a…”

Fill in that blank with some aspect of your persona. A Biology major. A football player. A theatergoer. A jazz aficionado.

You can even fill in types of animals, famous figures — real or fictional (I am fond of using Kermit the Frog), or other odd entities (like Martians).

In short, ChatGPT will personalize an explanation for every difficult concept for every student.
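For anyone who would rather script this than type into the chat window, the two-step exchange looks roughly like the sketch below, written against the OpenAI Python SDK. The model name, the concept being explained, and the persona are my assumptions for the example.

    # A sketch of the two-step "Explain this to a..." exchange, scripted
    # against the OpenAI Python SDK. The model name, concept, and persona
    # are assumptions for the example; the same follow-up works in the
    # ordinary chat interface.
    from openai import OpenAI

    client = OpenAI()  # expects an OPENAI_API_KEY in the environment

    messages = [{"role": "user", "content": "Explain iambic pentameter."}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = first.choices[0].message.content
    print(answer)

    # The follow-up reframes the same concept for a particular student.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Explain this to a football player."},
    ]
    second = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(second.choices[0].message.content)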

AI and AR/VR/Spatial Computing are easy to dismiss as gimmicks, toys, and/or primarily sources of problems that committees need to address in formal institutional policies.

I am already trying to teach my students how to use ChatGPT to their benefit. There are a lot of digressions about ethics and the dangers of misuse.

But everyone agrees that these technologies will change our future. And as an English Professor, it is my job to try and prepare my students for that future as best I can.

To do that, I think I will need this space to think out loud.