
The Best Prepared Faculty to Teach AI Skills Are Already on Your Campus

One of the questions I've seen and heard asked of late, both explicitly and implicitly, is who is going to teach the general undergraduate student population how to use AI. Given the recent Cengage Group report that a majority of recent graduates wish they had been trained in how to use Generative AI, this is a skill colleges and universities will want to incorporate into the curriculum.

Remember: We’re looking at a general student population — not future coders. The world's departments of Computer Science are already working that problem, even as they grapple with the fact that their colleagues have created algorithms that can do much of what they are teaching their students to do.

Much, but not all.

So here’s what we need our students to learn: They need to learn how to consider a problem deeply and think through its issues. Then, they need to take what they have considered and use it to frame a prompt: a well-defined request accompanied by specific constraints that instruct the Large Language Model how to respond.

This is what every research methods course — whether specific to a major or embedded in the Freshman Composition sequence — tries to teach its students to do.

We are not looking at a significant search for new personnel or at long-term retraining of those already here. They already have the skills.

They need help reimagining them.

To facilitate this re-imagination, what faculty in these areas need is some basic support and training on how to incorporate Generative AI tools and tasks into their curriculum. That will let them move past the plagiarism question and begin to see this as an opportunity to finally get students to understand why they have to take their Composition or Methods class.

Administrators will have to figure out how to put the tools in faculty hands, how to provide the training they need, and how to better reward them for imparting the high-tech, business-ready skills that the AI revolution is demonstrating they provide.

At the Risk of (Briefly) Stating the Obvious

For those of you carrying an M-series iPad or Mac into the classroom and plugging into a projector, turn on Stage Manager. It lets you keep parts of your screen private (so you can open material that could raise FERPA concerns, or run web searches that make you hesitate, without projecting them).

While you are setting up Stage Manager, schedule a Focus Mode to turn on for class. This prevents all sorts of badly timed messages from accidentally disrupting class.

Not that I have ever done something like that to my wife.

These two small settings changes let your devices improve the quality of your life in the classroom.

Advice Instead of Just Complaining

Thus far in this blog's reboot, I have spent a good bit of time asserting that faculty need to change, adapt, and grow. But I have provided few examples and little advice on how we might do that.

In this post, I'd like to provide a place to start.

I'm an English Professor so my practical advice begins there. (Here?)

For many years, I have asked students to submit papers that required research. I spent time explaining how, where, and why to conduct that research.

With the arrival of Large Language Models like ChatGPT, I have begun to recognize I forgot to tell students something important about their research.

Here is what I am now telling them:

Whatever research they do, I can replicate. If I wanted to learn about their topics, I could go to the library and read the articles they have found. I can easily access what other scholars, journalists, and other experts have already said.

What I cannot do is find what they think about the topic.

That is what they bring to their assignments that ChatGPT never can.

And that is what I value most.

The rules of grammar and the skill of writing still matter, of course, but in an age when machines learn and compose, students need to be reminded of the central value of their own voice and viewpoint -- even if those are imperfectly and only partially formed.

But research should be there to support their thoughts — not replace them.

And my job is to help them better learn to express their own thoughts — not merely parrot the thoughts of others.

The challenge for all of us is that we cannot always be interested, and sometimes we have to be more prescriptive so that students learn the skills they need to express themselves. We get tired and overwhelmed. We can’t be there 100% of the time, and we are often asked to care for more students than we should by administrators whose job it is to focus on the numbers.

But our students have to learn that we try.

The Fallacy Inherent in Chasing AI Plagiarism

Many years ago, Richard Pipes, a librarian at Wingate University, revealed his secret to success in tracking down the sources used by students who plagiarize. He went to the most obvious source, he explained, because if a student were willing to put in the effort to find an obscure source, they would be willing to do the hard work of writing a paper.

That was a different age, of course. The hard-copy books and periodicals on the library shelves were still as accessible to students — if not more so — as those found on the internet.

Nevertheless, his logic still holds true. A student who is actively trying to plagiarize their way out of an assignment is different from one who does not understand how and when to cite a source -- whether that confusion arises from poor preparation in their prior education or is, in part, culturally determined.

Right now, I want to set aside those students who want to do it correctly (or are at least willing to do it correctly).

Right now, I want us to consider those who are plagiarizing intentionally with malice aforethought.

Catching these students has always been, and will always be, a cat-and-mouse game, and it is only when the task is seen in that light that practical approaches can be considered. For several years now, plagiarism detection tools have made the job of documenting their efforts easier.

But plagiarism detection services have always been hit-or-miss at best and actively problematic at worst, and things have not improved with the arrival of large language models.

For those who are hoping Turnitin will save you from the threat of an AI-generated paper, please know that it will almost certainly be one generation behind. At the time of writing, this means Turnitin believes it can identify work generated by ChatGPT 3.5 but is less certain it can detect work generated by ChatGPT 4.0.

It is possible for Turnitin to catch cases where ChatGPT 4.0 has been used, but doing so comes at a cost: It increases the odds of generating false positives.

Turnitin makes a point of discussing this risk on its pages devoted to AI, and what it has written there is worth reading for those trying to wrap their heads around our new normal.

I would stress one point in closing, though, for those who are looking to the hills, waiting for Turnitin or something similar to arrive and solve your problems.

You are choosing to trust an AI with your work instead of engaging in the hard work of adjusting your pedagogy.

That formulation should give you some pause.