Burning the Boats

A host of versions of this story appear in myth, history, and fantasy literature. The hero makes the point to his followers that there's no turning back by burning the boats they arrived in. Two weekends ago, I did something similar.

I erased all my settings on my M2 iPad Pro and restored it for my child’s use.

I'm now all in on the iPad Mini A17 Pro.

Well, at least as all in as someone with an iPhone can claim to be all in on any device.

I'm not doing this as some kind of stunt, and there are absolutely limitations I am placing on myself — although most are inconveniences rather than true limitations. The biggest limitation is, unsurprisingly, driven by the reduced screen real estate — the very reduction that makes the Mini the Mini.

But as I've written before, there is something about the Mini's form factor that appeals enough to make me choose it over more "powerful" devices. And if writing with an Apple Pencil slows me down a little, my writing feels better for the enforced deliberation — which feels like a feature rather than a bug when I have to respond to certain emails.

This isn't to say I've abandoned all keyboards. I have access to ones at work and home, and I still have the first-generation Keys-To-Go in my bag. So if I need the additional speed a keyboard provides, I can access it — or tap a virtual button and begin dictating.

But when possible, I am reaching for the Apple Pencil Pro.

The last word in that sentence, incidentally, is at the center of a post I intend to write soon. You see, despite what has been written and talked about, I have the sneaking suspicion that Apple isn't holding back an iPad Mini Pro.

I think that, despite pundits' pronouncements to the contrary, this may be an iPad Mini Pro.

Scoring My Hopes/Predictions

Apple dropped a press release announcing the iPad Mini 7 today. While my own purchase will have to wait a little bit, I am planning to get one sooner rather than later.

Press releases seldom cover all details, but there is enough in the text for me to at least see how my hope-casting back on September 15 did. (Spoilers: I was in the ballpark but probably in the shallow outfield rather than the infield.)

I think the A18 and A18 Pro chips that went into the latest iteration of the iPhones will make their way into the base iPad and iPad Mini so that the entire iPad line will be able to use Apple Intelligence. I am also willing to bet that the "mighty" iPad Mini will get the Pro version of the chip.

My hope for the more powerful chip did not come to fruition here — although the 3 nanometer A17 Pro is no slouch. That said, my underlying theory that they would make sure the iPad Mini 7 is capable of using Apple Intelligence was correct. If I had been playing the kind of game that is played on Connected, I would have been wiser to say the chip would be capable of Apple Intelligence rather than calling the chip by name.

Next time, I’ll listen to my gut on that one.

That doesn't guarantee that the Mini will get external monitor support but I suspect that is coming, too.

The press release doesn’t say one way or another and there is no mention of this kind of support on the tech sheet, so I suspect my hopes here will be dashed. Until I can get my hands on one (or a reviewer gets their hands on one), there is still theoretically room for hope but it’s the kind of theoretical hope I have for winning the lottery today.

The move to the Apple Pencil Pro (with the required shift in camera placement) also makes sense if Apple wants to fully shift to a two-Pencil future (Pencil — currently called USB-C — and Pro).

I got this one correct! Well, mostly. The front-facing camera has not moved. I would have thought this move would be driven by keeping supply and manufacturing simpler but that thought probably tells you more about me not being an engineer than anything else.

I am really looking forward to the capabilities of the Apple Pencil Pro as a tool for composition on the iPad Mini. My guess is that the revised palette that appears with the squeeze gesture will make a significant difference.

I wouldn't be surprised if the event's name is “Mighty”, given the rumors of a redesigned Mac Mini. If the HomePod Mini is given some stage time with an Apple Intelligence upgrade to highlight how Siri and Home are moving into the future, it would produce a neat narrative while showing off the prowess of Apple's engineers and what kind of technology they can fit into a small space.

A press release is not an event and the press release didn’t even use the word Mighty. Did they not listen to Tim Cook at the last event?

Given the nature of the iPad Mini's upgrades, I do wonder why they didn't add it to that event. I'm not convinced that it would have cannibalized any sales from the Air and Pro. Perhaps the October Mac event is getting shuffled, crowded, or in some other way adjusted and the Mini was the easiest device to shift to a press release.

Looking Forward to October

Unlike the professional Apple pundits, like the team at Connected (who are in the midst of their annual fundraiser for St. Jude's), there is no cost or benefit associated with any predictions I make about Apple products. And, as I've admitted in the past, there's as much wishing things into existence when I do this as there is true analysis.

That said, I have some thoughts about October.

I've written before about what I think (read: want) from the next iPad Mini. I think the recent iPhone announcement has hinted that what I want is about to arrive.

I think the A18 and A18 Pro chips that went into the latest iteration of the iPhones will make their way into the base iPad and iPad Mini so that the entire iPad line will be able to use Apple Intelligence. I am also willing to bet that the "mighty" iPad Mini will get the Pro version of the chip.

That doesn't guarantee that the Mini will get external monitor support but I suspect that is coming, too.

The move to the Apple Pencil Pro (with the required shift in camera placement) also makes sense if Apple wants to fully shift to a two-Pencil future (Pencil — currently called USB-C — and Pro).

I wouldn't be surprised if the event's name is “Mighty”, given the rumors of a redesigned Mac Mini. If the HomePod Mini is given some stage time with an Apple Intelligence upgrade to highlight how Siri and Home are moving into the future, it would produce a neat narrative while showing off the prowess of Apple's engineers and what kind of technology they can fit into a small space.

And if I'm wrong? Odds are you'll have forgotten I wrote this by then.

The Man in Black Has Not Entered the Building

Let me explain how a pedagogical imperative drove a wardrobe change for my first day of classes this year.

Oscar Wilde isn't the only person to have spoken about the theory of the Mask, but he's the one I reference when explaining it to students because of how he embodied it in his life and how well it is captured in the statue erected to his memory in Dublin.

In brief, the theory states that we change the face we show to the world based on where we are and who we are supposed to be within that context. This should not be mistaken for being fake. We all act differently when we are at work (where I am the professorial Dr. DeForrest), out with friends (where I am usually the good-natured Matt), and at home (where I am partaking of the role of husband, father, cook, and handyman). Each identity has much in common with the others, but they dress, sound, and speak differently enough that each would appear a stranger to someone who knew me from elsewhere.

My students at Charlotte, NC's only HBCU are often aware of the conscious to semi-conscious way their community has spoken of this behavior as code switching, but they are unaware how often they change their masks and are actively surprised to learn other communities face challenges that parallel their own.

Prior to this year, I leveraged the power of the Mask to set an immediate tone for my classes by arriving dressed all in black and only gradually adding color to my wardrobe over the course of a week. The theory that drove my sartorial choice was that the darker colors lent a formality and gravity to the first class meeting and helped students immediately take the class and themselves more seriously. Students who had been in my classes before, and understood the fuller picture of who Dr. DeForrest was and is, enjoyed being 'in the know' and watching the reactions my appearance generated. My colleagues were, in general, similarly amused. But even those who advocated for other approaches conceded they could not argue with my results.

This year, I didn't dress this way.

Part of the power that an all-black outfit draws upon is its echo of the priesthood and of death. This generation of students was scarred by a global pandemic.

They don’t need a reminder — conscious or subconscious — of death.

The students who come to campus now are more serious about their work than before. Their disrupted and distorted academic journey* has left them more in need of a guide than of someone who is trying to get them to see their college experience as something other than Animal House or House Party or whatever equivalent kids today are imagining.

So the Man in Black did not come to campus this year. There was a pedagogical need for him to stay away.

I just wish I felt more certain about what kind of guide they need.

* Don't fall into the trap of thinking today's students learned less during the COVID-19 pandemic than their predecessors because they do less well on measurements designed before our lives had been disrupted. They learned different lessons during lockdown and came away understanding and knowing other things.

"What's a computer?"

Apple caused a lot of consternation amongst the tech world's equivalent of the chattering classes (On a good day, I include myself in that group.) with the question asked at the end of this ad. For many, the question was one that challenged the preconceived notion of the form factor: Could something without a fixed and attached keyboard and a lot of I/O options really be a computer?

Some probably typed that objection on their lovingly crafted mechanical keyboards while their laptop was docked in clamshell mode, not giving it a second thought.

For others, the question was one of specs: Could anything with so little RAM or storage or CPU/GPU power really be a computer?

They probably didn't stop to consider that their cellphone's specs are superior to what NASA used to travel to the moon and that Voyager has less computational power than their car's key fob.

For still others, it’s about the software capabilities. Can anything that is incapable of running intensive desktop computing software really be considered a computer?

They probably didn't stop to ask if the real computer they used three to five years before could comfortably run the latest version of the program they are using as a benchmark.

I'm not making these observations to poke fun at the nameless, faceless strawmen I have set up to point at derisively. Rather, I think their objections are critical for understanding what I am beginning to explore in response to that ad's question.

“What’s a computer?”

The ad’s answer, according to the graphics that appear on the screen, includes an iPad Pro running iOS 11 — a device and operating system those of us living the iPadOS lifestyle might consider limited in the same way my strawmen might attribute to our iPads.

But the point of the ad, along with ads like "Homework" — an ad that haunts me because it highlights what I fear are my own pedagogical shortcomings — is that the computer must serve a purpose if it's going to be real.

I suspect many consumers generally “get it” — whether they are buying an iPad, a Mac, or a Windows machine. It's about what the device can (or can't) do for them and what they're comfortable with.

The iPad Mini I am writing this on right now is, for my immediate need, more powerful than the most impressively tricked out Mac Pro because the Mac Pro doesn't support Scribble or the Apple Pencil. And while there's no question that the new M4 iPad Pro outperforms my Mini, the Mini's form factor still delights me more than the Pro's would.

So what's a computer? It's a tool — a tool that can only be measured by its utility to the user and not an abstract set of specs and form factors.

My preference for the Mini comes with clear trade-offs. The smaller size that, for reasons I cannot explain, I prefer can feel cramped at times and is less forgiving when dealing with online meetings. And I will need to post this via my iPad Pro because Squarespace doesn't trust the Mini with hyperlinks. But I get more worthwhile (Your opinion may differ.) writing done on it with my Apple Pencil than I do on my iPad Pro with its excellent Magic Keyboard. And I have noticed I actively dislike the thought of using a traditional computer of any manufacture.

The question asked by the girl in the ad is one every user looking at a new device should ask. What, for them, is a computer? And how open are they to change?

The Best Prepared Faculty to Teach AI Skills Are Already on Your Campus

One of the questions I've seen and heard explicitly and implicitly asked of late is who is going to teach the general undergraduate student population how to use AI. Given the recent Cengage Group report that the majority of recent graduates wish they had been trained on how to use Generative AI, this is a skill colleges and universities will want to incorporate into the curriculum.

Remember: We're looking at a general student population — not future coders. The world's departments of Computer Science are already working that problem and grappling with the fact that their colleagues have created algorithms that can do much of what they are teaching their students to do.

Much, but not all.

So here's what we need our students to learn: They need to learn how to consider a problem deeply and think through its issues. Then, they need to take what they have considered and use it to frame a prompt that consists of a well-defined request accompanied by specific constraints that instruct the Large Language Model how to respond.

This is what every research methods course — whether specific to a major or embedded in the Freshman Composition sequence — tries to teach its students to do.
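To make that concrete, here is a minimal sketch of what such a prompt might look like. It is my own illustration, not a reference to any specific tool; the function and the example wording are invented for the purpose.

```python
# Hypothetical sketch: framing a prompt as a well-defined request
# plus explicit constraints. The names and wording here are invented.
def build_prompt(request: str, constraints: list[str]) -> str:
    """Combine one clear request with the constraints that bound the response."""
    lines = [f"Request: {request}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    request="Summarize the attached article for first-year composition students.",
    constraints=[
        "Keep the summary under 200 words.",
        "Quote only passages that appear in the attached text.",
        "Flag any claim you cannot ground in the source as uncertain.",
    ],
)
print(prompt)
```

The thinking happens before the typing: deciding what to ask and which constraints matter is the research-methods skill. The template just holds the result.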

We are not looking at a significant search for personnel or long-term re-training of those already there. They already have the skills.

They need help reimagining them.

To facilitate this re-imagination, faculty in these areas need some basic support and training on how to incorporate Generative AI tools and tasks into their curriculum so they can move past the plagiarism question and begin to see this as an opportunity to finally get students to understand why they have to take their Composition or Methods class.

Administrators will have to figure out how to put the tools in faculty hands, how to provide the training they need, and how to better reward them for imparting the high-tech, business-ready skills that the AI revolution is demonstrating they provide.

More on the Problem of “Pro”

I was watching another tech YouTube video on computer and/or iPad peripherals the other day (I can stop any time I want.). In the video, the presenter was describing a 7-in-1 USB-C dongle that had all the ports he and any other pro users might need.

I'm not sure why this video struck me more forcefully than any of the others I've watched, but his focus on the utility provided by the SD card and microSD card slots leapt out at me, as two facts immediately presented themselves:

  • I can't remember the last time I used an SD card. Yes, I can appreciate how important they are for podcasters and YouTubers and have considered them as an external storage option, but I have never found a need to use them.*

  • I have needed the VGA connection on my Satechi Multiport Adapter twice in the past year while presenting at universities in Europe and the US. The VGA connection, in fact, was the "Pro" feature that made me go with this model rather than some of the more streamlined ones.

I have written before on how "Pro" means different things to different people. What strikes me as remarkable is how often we confuse general purpose devices for tailored machines and knock them for requiring peripherals to serve our specific needs — as if peripherals (now often connected via dongles) were signs of a device's limitations rather than its adaptability.

* Yes, I know: Having published that, I will need to use one at some point in the coming days.

Apple Intelligence — Hiding in Plain Sight

I haven't installed the iOS or iPadOS 18 beta software. This will come as a surprise to no one. After all, I'm not a developer. I'm not a reviewer. I'm not a Podcaster, YouTuber, or similar Creative who needs to generate content on a regularly scheduled basis.

But I am interested. So, I read, listen, and watch the material being created by reviewers, podcasters, and YouTubers.

Given the public interest in AI, I can understand why these creators keep their focus on Apple Intelligence and whether any signs of it have appeared in the betas. It's their job to let us know if it has or hasn't.

What I am trying to think through is what to make of what often follows in these beta reports: Updates on what new machine learning features have arrived.

These features are not part of what has been branded as Apple Intelligence, but they are features that draw on artificial intelligence.

I bring this up not to try and shame the content creators struggling to keep up with a fast changing story. By and large, they are doing good work. Rather, I want to highlight how the most significant features and changes AI will bring may be invisible to users.

For those of us trying to make sense of a future that includes generative AI, LLMs, and other machine learning advances, trying to capture these changes clearly for our audiences while various corporations and scholarly communities introduce language that segments the field is no simple task. Nor is it an insignificant one. Trying to explain to colleagues why they should be attentive to these developments involves getting them to see a continuum of technology — one that spell check and predictive text already have them on — and obligates them to grapple with the fuzzy lines branding draws.

I'd love to conclude with a neat and tidy solution as to how to make it clear and comprehensible that Scribble (which I am using to write this post), text smoothing (available in the iPadOS 18 betas), and Apple Intelligence are connected yet distinct. If I could do that, I would be more able to tease out how and where Generative AI could be best employed (and best not employed) in the brainstorming, organizing, outlining, drafting, editing, proofreading, publishing continuum of the writing process as a tool for creation and learning.

Creating and learning to create are two very different things. And I absolutely believe that going back to Blue Books is not the answer. Don't laugh. I know colleagues who have been advocating for that for well over a decade. Several years ago, it was because Blue Books keep students from accessing the internet while they write, making our assessment results neater even though our students will never live in a world where they can't access the internet. Now, of course, it's to make sure they don't hand in generated text.

But if we aren't going back to Blue Books, and we want to keep a general public informed about what AI can (and can’t) do, we have to figure out how to make the differences between the expressions of machine learning more approachable.

AI and the Would-Be Author*

At the recent AI for Everyone Summit, one of the things I was asked to think about was the "write my book" tools** that have begun to make an appearance. The goal behind these tools, to offer one 'using AI correctly and ethically' use case, is to get a rough but clean first draft done by having the AI respond to a detailed, complex prompt (We're talking about a paragraph or two of instructions, with additional material potentially uploaded to feed the LLM with more data to work from.). From there, the would-be author would proceed to rewrite what had been generated, checking for GenAI hallucinations and places where they need to expand on the generated text to better capture their idea.

These tools, then, can serve as ghost writers for those who question whether they have the time, inclination, dedication, or skill to produce a vehicle*** for their idea. The complex prompt and the editing of the generated text is where the thinking part of writing takes place.

“Their idea” is where I can sense a kind of value here. If you scroll long and far enough on LinkedIn, you are almost certain to come across a post that reminds you that ideas don't have value because only the products of ideas have value. I'm sure that you, like me, can think back over the many times you have offered others (or been offered) good ideas that would have benefitted them, only to watch those ideas go untaken while others with a similar idea benefitted when they acted.

It's common enough to be a trope — often framed as an "I told you so" being delivered to a bungling husband by a long-suffering sit-com wife.

And if all you are looking for is to get your idea out into the world in a publication, it's hard to argue with using these tools — especially for those in the academy whose annual assessments and tenure and promotion are tied to the number of publications appearing on their c.v.

But the transfer of information from one person to another is only one of writing's purposes. Like any medium of communication, part of writing is engaging the reader and keeping them interested in whatever it is a would-be author is writing about.

During a recent livestream of 58 Keys, I asked William Gallagher for his thoughts on the GenAI tools that are appearing and if he intended to use any of them, given what they can do to an author's voice. In brief, he replied that he could see the utility of a more advanced grammar checking tool but balked at autogenerated text — including autogenerated email.

He pointed out how we, as writers, were advertising our skills with every email (joking that the advanced grammar check may result in false advertising). And he highlighted the response of another participant, who wrote in the chat "If I'm not interested in writing the message, why should I expect someone to be interested in reading it?"

That question, I think, gives hope for would-be authors, gives important guidance for those considering generated text tools, and should give pause to those who believe they can outsource writing to AI.

A message to a team rounding up possible times to meet is the kind of message an AI can and should write — assuming everyone's AI Agent has access to accurate data. Asking an AI to pitch an idea or propose a solution is riskier because it doesn't pass the "why should I expect them to read it" test.

Rather than de-valuing writing, this highlights the value of good writers — people who have learned the how and why of communicating in a way that creates an expectation of interest in what's being written and why it's important.

———————

* I will be using this phrase throughout this post, but I ask you, gentle reader, to not read it pejoratively. To call this theoretical individual "the user" misses the mark, I think, as it focuses us too much on the tool and not enough on the intent. "Would-be" is needed, however, because our theoretical individual has not completed the task of bringing their completed work to publication. Real world users of these tools, after all, may or may not be authors of prior works.

** I haven't experimented with a specific one at the time of writing.

*** I use "vehicle" here because there are tools that generate images (still or moving), music, presentations, computer code, and likely other forms of media I don't know about. This question isn't exclusive to writing.

AI, Copyright, and the Need for Governmental Action

Federico Viticci and John Voorhees of MacStories have released an open letter to EU and US lawmakers and regulators with their concerns over the way that most Large Language Models have been trained. In brief, they point out an obvious and undeniable truth: that training a model on in-copyright text scraped from the open web is intellectual property theft.

What they have written is self-evidently true and it is time for those who can act to act.

Recent comments by Mustafa Suleyman, Microsoft's CEO for AI, demonstrate that companies, in their rush to get to market, have not even begun to think through the implications of what they are doing.

While I remain hopeful for what AI will be able to do in the future, it is clear that we have a lot of work to do in the present.

There is no question in my mind that those who have scraped copyrighted material need to either license that material or rebuild their models from the ground up based on licensed and out-of-copyright material.

The smartest companies will get out ahead of this. The least ethical will try to weather the storm or find a buyer and walk away before it hits.

Evolution of the Desk?

Every now and again, I am reminded of the Harvard Innovation Lab's video that captures the way the computer has reduced the number and kind of items we keep on our desks. And while any given version of the video will have commenters chime in about its visual fallacies, its message remains clear: digital technology has transformed the way we live and work.

As I sit here in a recliner with my iPad Mini resting on a lap desk as I write this with an Apple Pencil, what strikes my imagination most forcefully about the video are the assumptions we don't question about the changes that took place between 1980 and 2014 — now, as Dr. Robbie Melton pointed out at the AI for All summit, a decade ago....

If I may borrow Apple's formulation of the question, “What’s a desk?”

It's a less trivial question than it may sound, as CPG Grey pointed out during the pandemic. And if the advice he offers is targeted at mentally surviving the pandemic, its lessons remain applicable post-pandemic. We function best when we compartmentalize our lives. When we have a space we can dedicate to parts of our lives, it helps us accomplish what we want and need to do.

So, what is a desk? What is it for? And if we have transformed them in the way the Harvard Innovation Lab has suggested, why do they keep collecting stuff?

It has to be something more than the companies that design and manufacture the desk systems so popular with tech YouTubers (and, based on my watch history, with me, too). Even in the absence of such systems, we decorate the space with tchotchkes, mathoms, and items we need to get to and are almost certain to get to sometime very soon.

What I am feeling that I should think through is the relationship between one desktop (physical) and the other (digital). Both are places I have invested time in personalizing and optimizing. But I'm increasingly conscious this summer, as I move between my desk, the kitchen table, Alice Jules Coffee Shop, my campus office, and this recliner, that my physical and digital desktops lack a kind of connection that it feels like they should have.

I want to spend some time considering that disconnect and whether it matters. I have a sneaking suspicion it has something to do with the keyboard and how it, a pointer device, and an external monitor (and access to an outlet) have begun to define a desk in a way that isn't demanded by my iPad Mini.

What is it I am actually asking of the horizontal working surface and its associated storage?

Why I Suspect there is an Absence of Photorealism in Apple's AI-Generated Images

I've read some commentary here and there about how the images Apple Intelligence generates are insufficiently photorealistic. One member of the MacStories' Discord (apologies for the lack of credit where credit’s due — there’s been a lot of discussion there) suggested that the image quality, in this regard, might improve in time as Apple’s models ingest more and develop further.

I observed there that I suspected the choices Apple made about the kinds of images generated by its Machine Learning might be intentional. With machine learning tools that remove people and objects in the background coming to Photos, Apple is showing its models are capable of photorealistic renders.

So why might they not permit Image Playground to do the same?

I can see where some of this may be limitations dictated by on-device computational power. While I'm no engineer, I would guess fixing/repairing part of a photograph asks less of a Neural Engine than creating a full photo. Even so, the use of cloud computing would be an easy enough way to get around this.

Rather, I suspect it has everything to do with user responses to the generated images and, sadly, the motivations some users might have. Transforming a prompt into an image that is obviously not real is usually more than enough to meet most users' needs. The creation of a gazebo for a Keynote, to use Apple's example, is one where a visual concept is being communicated and that communication does not suffer from a lack of photorealism.

But there are cases where people can and will suffer from the malicious deployment of photorealistic renders. Indeed, in a rare case of high profile bipartisanship, the US Congress is (finally!) moving to criminalize deepfake revenge porn. Senator Ted Cruz (R) is working with Senator Amy Klobuchar (D); both were candidates for their party's nomination for the presidency, and they find easy common ground here.

As well they should. It has been reported that middle schoolers in Florida and California (and, I suspect, elsewhere) — a demographic seldom used in profiles of good sense and sound decision-making — have learned they can use AI to generate photorealistic nudes of their classmates.

There's a reason we find Lord of the Flies plausible — even if the historical event turned out better than the fiction.

It's the kind of problem that even an AI-enthusiastic techbro focused on making the Star Trek computer or their own personal Jarvis should have seen coming, because it always seems to happen.

It was obvious enough to come up in a meeting at Apple.

And sidestepping photorealism is an obvious solution.

Keeping their Image Playground in check in this regard makes good sense and is, I would argue, protecting its users from those who might use its technology for malicious purposes.

Writing and AI’s Uncanny Valley, Part Three

We respond to different kinds of writing differently. We do not expect a personal touch, for example, from a corporate document — no matter how much effort a PR team puts into it. We know the brochure or email we are reading (choose your type of organization here) is not tailored for us — even when a mail merge template uses our name.

Or perhaps I should say especially when it uses our name and then follows it with impersonal text.

But a direct message from someone we know, whether personally or professionally? We expect that to be from the person writing us.

And sound like it.

This is where AI-assisted writing will begin to differentiate itself from the writing we have engaged with up to now. If it is a corporate document — the kind of document we expect to have come from a committee and then passed through many hands — no one will blink if they detect AI. I suspect that this is where AI-assisted writing will flourish and be truly beneficial.

To explain why, let me offer a story from a friend who used to work in a financial services company. Back in the 80s, some of their marketing team started using a new buzzword and wanted to incorporate it into their product offerings, explaining to the writing team that they wanted to tell their older clients how they intended to be opportunistic with the money they had given them to invest.

The writers in the room, imagining little old ladies in Florida who preferred the Merriam-Webster Dictionary definitions to whatever market-speak was the flavor of the month, were horrified and asked if the marketers understood that 'opportunistic' meant "exploiting opportunities with little regard to principle." The marketers, oblivious to how their audience would hear that word, had been prepared to tell their company's clients they wanted to take advantage of them.

AI is simultaneously well positioned to catch mistakes like that and assist teams in fixing them, and to manufacture such mistakes, which will need to be caught by the same teams — all based on the data (read: writing) it was trained on. I stress assist here because even the best generative AI needs to have its work looked over in the same way any human writer does. And because they are statistical models of language, they are subject to making the same kinds of mistakes the marketers I was just mentioning did.

AIs will also be beneficial as first-pass editors, giving writers a chance to see what grammar and other issues might need fixing. I don't expect to see them replacing human editors any time soon, as integrating changes into a text in someone else's voice is a challenging skill to develop.

Personal correspondence, whether it is an email or a letter, and named-author writing will be the most challenging writing to integrate with generative AI. Lower-level AI work of the kind we are currently used to — spellcheck and grammar checking — will continue to be a tool that writers can freely use. I haven't downloaded Apple's iPadOS 18 beta software but will be interested to see how it performs in that regard.

The kind of changes generative AI produces runs the risk of eroding the trust of the reader, whether they can put their finger on why they are uncomfortable with what they are reading or not.

Yes, there is a place for generative AI in the future of writing. I suspect the two companies best positioned to eventually take the lead are Apple (which is focusing its efforts on device, where it can learn the voice of a specific user) and Google (which has been ingesting its users' writing for some time, even if it has been for other purposes). Microsoft's Office suite could be similarly leveraged in the enterprise space, but I don't have the sense people turn to it for personal writing.

That may tell you more about me than the general population.

These three usual suspects, and whatever startups believe they are ready to change our world, will need to learn how to focus large language models on individual users. Most text editors can already complete sentences in our voices. The next hurdle will be getting these tools to effectively compose longer texts.

If, in fact, that is what we decide we want.*

———————

* Not to go full 1984 on you, but I do wonder how the power to shift thought through language might be used and abused by corporations and nation states through a specifically trained LLM.

Writing and AI’s Uncanny Valley, Part Two

As I mentioned yesterday, the final papers I received this semester read oddly, with intermittent changes in the voice of the writer. In years past, this shift would be a sure sign of plagiarism. The occasional odd word suggested by a tutor or thesaurus isn't usually enough to give me the sense of this kind of shift. It's when whole sentences (or significant parts of sentences) shift that I feel compelled to do a spot search for the original source.

More often than not, this shift in voice is the result of a bad paraphrase that's been inappropriately cited (e.g., it's in the bibliography but not cited in the text). More rarely, it's a copy/paste job.

With this semester's final papers, I have begun to hear when students using AI appropriately are having sections of their paper "improved" in ways that change their voice.

This matters (for those who are wondering) because our own voices are what we bring to a piece of writing. To lose one's voice through an AI's effort is to surrender that self-expression. There may be times when a more corporate voice is appropriate, but even the impersonal tone of a STEM paper has something of its author there.

To get a sense of how much was being lost when an AI was asked to improve a piece of writing, I took five posts from this blog and asked ChatGPT 4o and Google Gemini to improve them. I uploaded the fifteen files into Lexos, a textual evaluation tool developed at Wheaton College by Dr. Michael Drout, Professor of English, and Dr. Mark LeBlanc, Professor of Computer Science.

The Lexos tool is sufficiently powerful that I am certain I'm not yet using it to its full capacity, and learning to do so is quickly becoming a summer project. But the initial results from two of the tools were enough to make me expand my initial experiment by adding four additional texts and then a fifth.

The four texts were William Shakespeare's Henry V, Christopher Marlowe's Tamburlaine and Doctor Faustus, and W.B. Yeats' The Celtic Twilight — as found on Project Gutenberg. The first three were spur-of-the-moment choices of texts distant enough from me in time as to be neutral. I added Yeats out of a kind of curiosity to see if my reading and rereading of his work had made a noticeable impact on my writing.

Spoilers: It hadn't — at least not at first blush. But the results made me seek out a control text. For this fifth choice, I added Gil Scott-Heron's "The Revolution Will Not Be Televised" because of its hyper-focus on the events and word choice of the late sixties and early seventies. This radical difference served as a kind of control for my experiment.

The first Lexos tool that hinted at something was the Dendrogram visualization, which shows family tree-style relationships between texts. There are different methodologies (with impressive-sounding names) that Lexos can apply that produce variant arrangements based on different statistical models.
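For readers who want a feel for what this kind of clustering does, here is a minimal sketch in Python. It is my own illustration, not Lexos's code, and it assumes a folder of plain-text files (the originals, the AI rewrites, and the controls); the TF-IDF vectors and cosine distance stand in for whichever tokenizer and metric Lexos applies.

```python
# A minimal sketch of dendrogram-style text clustering.
# This is an illustration, not Lexos's implementation; the folder
# name "texts" and the file layout are assumptions.
from pathlib import Path

import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.feature_extraction.text import TfidfVectorizer

# One plain-text file per document: originals, AI rewrites, controls.
paths = sorted(Path("texts").glob("*.txt"))
docs = [p.read_text(encoding="utf-8") for p in paths]

# Turn each document into a normalized word-frequency vector.
vectors = TfidfVectorizer().fit_transform(docs).toarray()

# Swapping the linkage method ("average", "complete", ...) or the
# distance metric is what produces the variant tree arrangements.
tree = linkage(vectors, method="average", metric="cosine")

dendrogram(tree, labels=[p.stem for p in paths])
plt.tight_layout()
plt.show()
```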

The Dendrogram groupings generated by Lexos.

These showed predictable groupings. Scott-Heron was the obvious outlier, as was expected of a control text. The human-composed texts by other authors clustered together, which I should have expected (although the close association between Henry V and Tamburlaine — perhaps driven by the battle scenes — was an interesting result). Likewise, the closer association between the ChatGPT rewrites and the originals came as no surprise, as Gemini had transformed the posts from paragraphs to bulleted lists.

What did come as a surprise were the results of the Similarity Query, which I as much stumbled across as sought out. Initially, I had come to Lexos looking for larger, aggregate patterns rather than looking at how a single text compared with the others.

It turned out the Similarity Queries were the results that showed the difference between human-written text and machine-generated text.

Similarity Query for Blog Post Zero. The top of the letters for "The Revolution Will Not Be Televised" can barely be seen at the bottom of the list.

Gil Scott-Heron remained the outlier, as a control text should.

The ChatGPT 4o rewrite of any given post was listed as the closest text to the original, as one would expect.

What I did not expect was what came next. In order, what appeared was:

  • The non-control human texts.

  • The ChatGPT texts.

  • The Gemini texts.

The tool repeatedly marked the AI-generated text as different, more like itself than like a human writer.
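A similarity query of this kind is easy enough to sketch, again as my own illustration rather than Lexos's implementation; the file name "blog_post_zero" is a hypothetical stand-in for the post being queried.

```python
# A minimal sketch of a similarity query: rank every document by
# cosine similarity to one chosen post. An illustration only; the
# folder layout and the query file name are assumptions.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paths = sorted(Path("texts").glob("*.txt"))
docs = [p.read_text(encoding="utf-8") for p in paths]
vectors = TfidfVectorizer().fit_transform(docs)

names = [p.stem for p in paths]
query = names.index("blog_post_zero")  # hypothetical file name

# One row of the similarity matrix: the chosen post against all documents.
scores = cosine_similarity(vectors[query], vectors)[0]

# Most similar first; the post itself scores 1.0 at the top.
for score, name in sorted(zip(scores, names), reverse=True):
    print(f"{score:.3f}  {name}")
```

Run over my corpus, a ranking like this produced exactly the ordering listed above.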

Here, Lexos dispassionately quantifies what I experienced while reading those essays. The changes made by generative AI change the voice of the writer, supplanting that voice with its own.

This has serious implications for writers, editors, and those who teach them.

It also has implications for the companies that make these AI/LLM tools.

I will discuss some of that in tomorrow’s post.

Writing and AI’s Uncanny Valley, Part One

TLDR: A reader can tell when an AI rewrites your work, and that shift in the text will give your readers pause in the same way generative images do, eroding their trust in what they are reading.

A more academic version of this research will be submitted for peer review but I wanted to talk about it here as well.

First, a disclaimer: I am hopeful for our AI-supported future. I recognize that it will come with major disruptions to our way of life and the knowledge industry will be impacted in the same way that manufacturing was disrupted by the introduction of robots in the 1980s.

The changes this will create won't be simple or straightforward.

Those considerations are for a different post and time.

Right now, I want us to look at where we are.

During the 2023-24 academic year, I began integrating generative AI into my classes. As I am an English professor, most of this focused on how to responsibly use large language models and similar technologies to better student writing and assist students with their research.

It was a process that moved in fits and starts, as you might imagine. As the models changed (sometimes for the better, sometimes for the worse) and advanced (as, for example, ChatGPT moved from 3.0 to 3.5 to 4o), I had to adjust what I was doing.

My biggest adjustment, however, came with the final papers, when I unexpectedly found myself staring into an uncanny valley.

One of the things English professors are trained to do is hear the shifts in language as we read. The voice we hear — or at least listen for — is the unique voice of a writer. It's what makes it possible to read parts of "Ode to the West Wind" and "Ode on a Grecian Urn" and recognize which one is Shelley and which one is Keats.

We don't learn to do this as a parlor trick or to be ready for some kind of strange pub quiz. We learn to do this because shifts in tone and diction carry meaning in the same way a speaker's tone carries meaning for a listener.

Listening for a writer's voice may be the single most important skill an editor develops, as it allows them to suggest changes that improve an author's writing while remaining invisible to a reader. It's what separates good editors from great ones.

For those grading papers, this skill is what lets us know when students have begun to plagiarize, whether accidentally or intentionally.

But this time, I heard a different kind of change — one that I quickly learned wasn't associated with plagiarism.

It had been created by the AI that I had told the students to use.

After grading the semester's papers, the question I was asking shifted from if a generative AI noticeably changed a writer's voice to how significantly generative AI changed a writer's voice.

Spoilers: The changes were noticeable and significant.

How I went about determining this is the subject of tomorrow's post.

The Let Loose Event and the iPad Mini

Along with all of the other iPad true believers, I watched the "Let Loose" event today. Even before it started, I didn't expect to be the main target for this release cycle. After all, my current iPad Mini and iPad Pro more than serve my needs. And of the two, the Mini is the device my wants and desires are focused on.

This isn't to say I didn't watch with interest. There was a lot of spoken and unspoken news released today — some of which has me thinking that an iPad Mini "Pro" may not be as impossible as I once believed.

But first, the rundown of what I think is, rather than what I think may yet be.

The iPad Pro

For those wondering why Tim Cook billed today as the biggest announcement for iPad since its introduction, let me offer this observation: The iPad Pro is now a device that you know you need or you know you want in the same way you know you want or need a MacBook Pro. It has pulled away from the iPad Air in that it has targeted uses and users.

Does this mean non-artists (artists being broadly defined), non-"creatives", and non-gamers should stay away? Of course not. If it makes you happy and you have the money to buy into "the ultimate iPad experience", knock yourself out. Have fun and don't let the cynics get to you when they feel the urge to say you don't need all that power and that the apps don't take full advantage of the power that's there.

Smile and enjoy the ride.

The iPad Air

The iPad Air has now become a reasonable laptop replacement for most users. Keep in mind that if you were watching “Let Loose”, you are not likely to be one of these users. You are probably a tech aficionado of some sort or someone whose livelihood is defined by the specs of the machine in front of you.

The Air is a solid general use machine that does most of what general users need while offering the flexibility of an Apple Pencil. And it's a lovely device for them. It's the easy choice of the iPad range.

The iPad

This is the device for those who are highly price sensitive and/or just want a full size tablet and aren't looking for a full laptop replacement — perhaps because they don't need something like a laptop. It’s a good option for what it is and a fantastic option for those who know that they don’t want it to be what it isn’t.

The iPad Mini

First things first: I believe that this is the device for those who are (like me) aficionados of the form factor or for those who need ultra-portability — people like doctors and nurses making rounds in a hospital and the pilots who so often appear in Mini ads.

But I promised my thoughts for the future of the Mini.

Now, I am under no illusions that Apple is listening or actively considering my wants and needs. I also concede that my thoughts are being driven by what I want and not by what is practical or possible. After all, I don't have sales numbers, so I don't know how much of an outlier I am in terms of desired use cases. I'm also not an engineer, so I can't know if my guesses are actually logical rather than just things that have the appearance of logic.

That disclaimer offered, one of Apple's points of pride was how thin the new Pros are. They are thin and more powerful. Somehow, the thermals issue was addressed with the M4 and the way the iPads Pro were engineered. The presentation nodded to some of the materials they employed.

That makes me think that thermal constraints might not be as great a restriction for a future Mini as I had thought (although distance to the battery could still be an issue). But I do wonder: If an M4 can be made to fit in a 0.21-inch (5.3 mm) 11-inch case, could one be fit into the 0.25-inch (6.3 mm) chassis of the current iPad Mini? And if not the M4, how capable would an A-series equivalent be?

What makes me hopeful that such things are being at least considered is that the iPad Mini is more Apple Pencil-centric than any other iPad. I can't help but think (want) an iPad Mini Pro or Air (or both) that supports the Pencil Pro.

Apple and the DOJ

I've resisted the urge to offer a hot take on the US Department of Justice's antitrust lawsuit against Apple. While I will admit to being puzzled by what little of it I have read and what I have read about, and while my years of teaching university-level writing have given me a good sense of good rhetoric and good argumentation, I am well aware that legal writing exists in a very different world than I inhabit.

As I regularly tell my classes, good and evil, right and wrong, just and unjust, and legal and illegal are different things that do not always overlap.

What I am willing to state here is something that I find particularly irritating about this suit. It is something that is adjacent to it rather than a part of it. And I am well aware that those who are critical of Apple's practices (even those who like Apple) may feel that my irritation should be limited by what they perceive to be Apple's bad decisions and prior practices.

Nevertheless, what bothers me is that this lawsuit (unintentionally, I suspect) strikes at Apple's commitment to user privacy.

What made me think of this was when I pulled my Chevy Volt out of the garage this morning and chided myself for not having replaced the belt on my garage door opener.

Both General Motors and the Chamberlain Group have tense relationships with Apple and its CarPlay and Home services/frameworks. Neither likes the way Apple has built a firewall between them and the data they could otherwise mine from their customers.

It is data that, as the New York Times has reported (follow this link for the initial story: https://www.nytimes.com/2024/03/11/technology/carmakers-driver-tracking-insurance.html), they can sell and that can impact people's lives in unexpected ways.

One of the reasons I pay more for Apple's goods and services is their commitment to protecting my privacy from those companies that would view my data as a profit center rather than something they have been entrusted with.

That is a choice the market offers. There are other options for those who either aren't as concerned about that exchange or (as unfair as it may be) who cannot afford that expense.

If I saw the Department of Justice making a similar push against data brokers and their partners and the practices they engage in for conspiracy or invasion of privacy, I would be less irritated. But without that, it feels like they are restricting Apple's ability to provide me with advantages that differentiate them from their competition.

My Response to an In-Store Demo of the Vision Pro

I expected to be impressed by the demo of Apple's Vision Pro when I went to the Apple Store. I was more impressed than I expected to be.

My reaction, in hindsight, parallels the subtle sense of awe and excitement the Apple Store employees who greeted me all had about the product. They knew what I was about to experience — an experience akin to the before and after response to climbing Kilimanjaro that Michael Crichton describes in his memoir Travels. It’s an experience (rather than a bit of knowledge) you can only understand and convey after the fact.

Apple has maximized its chances at giving users that experience by making the demo an experience. I don't say this to minimize what they have done to provide this — from the scheduled appointment to the greeters to the employees who lead you through the demo to the runners wearing rubber gloves who bring out the Vision Pro that has been configured for you. Not only is it Apple's job to make this a compelling experience (They have products to sell.) and not only does Apple have a reputation to maintain, Apple has a new computing concept to explain to a large public — much larger than the public they introduced the Mac to — that will only be able to "get it" when they experience it.

Make no mistake: Spatial computing is not a gimmick. It has promise and potential that is apparent in a version 1.0 demo that leans into the wonder and magic and pushes the potential for "productivity" and/or creativity into the background.

Let me offer one potential example that points towards this. And keep in mind what Apple's executives have been telling us: They have been working on this device and its attendant experience for some time. Few would have guessed when they gave us Desk View, for example, that we were looking at a preview of the way the Vision Pro might track our hands.

The first example is the rollout of Freeform, which may now become a collaboration space with an even more infinite canvas. It was interesting when it came to the Apple ecosystem. Now, Freeform has a new dimension — one that will let users simultaneously interact with the whiteboard-equivalent in front of them and/or on the device (Mac, iPad, or iPhone) at hand while collaborating with others remotely. Unfortunately, as someone who does not have a Vision Pro, this is something beyond my ability to test. My guess, however, is that the ability to collaborate with yourself and others via iCloud will initially appear awkward to those of us who are not used to Wacom tablets. Nevertheless, it will permit a powerful level of collaboration.

I suspect the same will be true, albeit to a more limited extent, for Notes and the iWork suite. Here, I am interested to see if the Notes collaboration feature for iPadOS 17 (as seen about 41 minutes into the WWDC keynote) will be part of Apple's roadmap for this.

That leads me to two opposing truths — truths that will be challenging for organizations to reconcile.

  • Truth #1: I suspect this is a platform people should be experimenting with, and those that do not run the risk of giving others a head start at understanding this emerging future.

  • Truth #2: I am not sure I could persuasively justify the cost to the powers that be via a normal purchasing process — especially since it is so attached to a single user rather than being the kind of device you could pass around.

I think it will take, if you can forgive the word play, a certain kind of vision on the part of leaders to understand the need for someone to try and wrap their head around this.