Why I Suspect Photorealism Is Absent From Apple's AI-Generated Images

I've read some commentary here and there about how the images Apple Intelligence generates are insufficiently photorealistic. One member of the MacStories Discord (apologies for the lack of credit where credit's due; there's been a lot of discussion there) suggested that image quality in this regard might improve over time as Apple's models ingest more data and develop further.

I observed there that I suspect the choices Apple made about the kinds of images its machine learning generates are intentional. With machine learning tools that remove people and objects from the background coming to Photos, Apple is showing its models are capable of photorealistic renders.

So why might Apple not permit Image Playground to do the same?

I can see where some of this may be a limitation dictated by on-device computational power. While I'm no engineer, I would guess that repairing part of a photograph asks less of a Neural Engine than generating a full photo from scratch. Even so, cloud computing would be an easy enough way to get around this.

Rather, I suspect it has everything to do with how users respond to generated images and, sadly, with the motivations some users might have. Transforming a prompt into an image that is obviously not real is usually enough to meet a user's needs. Generating a gazebo for a Keynote slide, to use Apple's example, communicates a visual concept, and that communication does not suffer from a lack of photorealism.

But there are cases where people can and will suffer from the malicious deployment of photorealistic renders. Indeed, in a rare case of high-profile bipartisanship, the US Congress is (finally!) moving to criminalize deepfake revenge porn. Senator Ted Cruz (R) is working with Senator Amy Klobuchar (D); the two, both former candidates for their party's presidential nomination, have found easy common ground here.

As well they should. It has been reported that middle schoolers in Florida and California (and, I suspect, elsewhere), a demographic seldom held up as a model of good sense and sound decision-making, have learned they can use AI to generate photorealistic nudes of their classmates.

There's a reason we find Lord of the Flies plausible, even if the real-life event turned out better than the fiction.

It's the kind of problem that even an AI-enthusiast techbro focused on building the Star Trek computer or a personal Jarvis should have seen coming, because it always seems to happen.

It was obvious enough to come up in a meeting at Apple.

And sidestepping photorealism is an obvious solution.

Keeping Image Playground in check in this regard makes good sense and, I would argue, protects users from those who might turn its technology to malicious purposes.