When Not To Use AI

The other day LinkedIn served me up a post from a photo retoucher who made the perfectly valid point that while AI can’t fix an inherently bad photo, it can be used to rescue one from disaster. I’m kicking myself for not saving the post, because while it made some interesting points, it also reminded me that context and purpose matter as much as the content of the image.

Let me explain that a bit (ok, a lot) more.

Macaroongate

Their example was a photo sent to them by a client: a food photo of a macaroon on a bright pink background. There was a spatula sticking out from under the macaroon which needed to be removed, the depth of field was wrong (more of the product needed to be in focus), and the background needed to be plain white.

The retoucher said they’d used AI extensively to correct the flaws and create a commercially usable image. All good then, except the casual reader might have been left with the impression that it’s ok to do this kind of twiddling on any photo, regardless of the content or the purpose of the image. This is where I would advise caution.

Content and Context

When considering where, how and why a photo is to be published, context becomes a critical consideration.

Photos which are intended to represent reality mustn’t be altered. It doesn’t matter if it’s for a newspaper or just a tweet: if the context is to illustrate a PR event or news story, alteration of the image beyond what the camera saw at the moment of capture is wrong. In the case of newspapers (and their associated websites and social media channels), image manipulation beyond certain specified basics is considered a breach of the Editors’ Code of Practice.

Even in the Wild West Frontier of social media, brand credibility can be trashed if images are manipulated. Adding logos to clothing or signage, moving or removing irritating background items or changing colours (amongst many other dodgy options) should all be considered no-nos when the purpose of the photography is to illustrate an event.

All of which brings me to a recent failing of my own.

Kicking Myself (for the second time in this article)

In the group photo below, taken for the University of Bath, what irritates me the most is the fan lurking at the back of the stage. I’d already shifted it as much as I could before the event kicked off so it didn’t show up behind speakers at the lectern. However, when I had only a few seconds to get the group shot at the end (I needed to be quick, or risk making Sir Chris Whitty miss his train), I failed to notice it was now visible again.

The simplest fix would have been to bring the pop-up banner (at left) forward. This would have hidden the fan and the table with the water glass, balanced the group and made the branding more prominent. One small action would have tidied the entire picture!

Thinking back, I was rather preoccupied with organising six people into a tidy group under time pressure, while simultaneously fretting about whether the poor stage lighting would give me a clean image. But it’s easy to make excuses after the event.

You might argue that since the photo was staged and therefore not ‘reality’, I could have used an AI service to move the banner and fix all the problems I’ve listed. But the thing is, even a staged photo at a real event contains its own kind of reality.

What Is Reality Anyway?!

We could argue about the truth of any photograph, but even though the viewer here would understand, without needing to be told, that this is a staged group photo, using software to tidy the scene after the fact would still be deceptive.

Of course this isn’t a hard news photo, but it is a record of an event which took place, and it is destined to be used to ‘report’ on that event. Therefore, manipulation would not have been a good idea.

Maybe I should start using sloppy background errors as a way of ensuring nobody thinks my work is manipulated, a sort of signature of authenticity if you like. No, I think I’ll just remind myself to always check the background first (one of my earliest lessons as a local news photographer).

When setting up a picture like this group, it would be acceptable to move elements and arrange people for the optimum photo before it’s taken; doing so in post-production harms our trust in what we see in media announcements.

What About Headshots?

It’s a little different when I’m doing corporate headshots or images for corporate websites and brochures where there is no pretence at representing a news story or event. The images on a business website are generally there to promote or sell a service. They effectively become advertising, where manipulation is fair(er) game.

For corporate portraits I have a policy of cleaning up temporary blemishes and removing stray hairs, but the circumstances, context and purpose of such photos are very different. I’m not trying to say, “This is exactly what Sheila Jones looked like on this particular day.” The client (or Sheila) wants to give a representation of themselves as a real person who’s friendly, professional and approachable. As long as the image isn’t altered beyond recognition, some retouching is perfectly acceptable.

On occasions when an image isn’t destined for publication (perhaps it’s just a keepsake for the participants), it’s also acceptable to apply heavier editing. The problem is that once an image is “out in the wild,” it’s harder to control where it might end up.

Which Leaves Me Where?

Back to my own example. Of course there are things I could have tidied up, but having made the picture I made, I accept it for what it is: a quick group photo, a record of a moment, where no one but me (and now anyone reading this article) will even notice the shortcomings of the result.

I don’t have to be fine with that, but neither will I beat myself up over it. I can be comfortable with the knowledge that I haven’t used AI to hide my mistake.

Just to say, the evening itself was fascinating and I highly recommend watching Sir Chris Whitty’s lecture via this link.

End of an Era?

Perhaps I’m joining dots which aren’t there, but with the passing of Elliott Erwitt, I’ve found myself pondering the state of the photographic industry and whether it’s truly entering a new era.

We talk about eras as if there’s some sudden cut-off point between a time when everything is one way and then suddenly it’s all changed. That new era then chugs along solidly until there’s another great upheaval.

Era Today, Gone Tomorrow

Of course, this is nonsense. It doesn’t matter how sudden a change is, there is always a transition period. And that speed of transition will happen more quickly for some, while others will barely notice it happening in their lifetimes. It also comes down to the nature of the era under scrutiny; in the transition from the Bronze Age to the Iron Age, the use of bronze didn’t vanish. Likewise, though obviously on a smaller scale, the same goes for the transition of film photography to digital, or black and white to colour.

Back to why Erwitt’s passing got me thinking about this, then. Well, it wasn’t just that. Nor was it only the passing of Larry Fink, but it’s fair to say we’re well into an era when the great photographers of the 20th Century succumb to the inevitability of chronology, and that in itself is enough to signal a shifting paradigm.

That AI Thing

The passing of ‘the old guard’ comes as AI-generated images have started to make an impact on the world of photography. That’s why this feels to me like a moment of deeper change.

Recently, World Press Photo tried to allow AI image generation in one of its categories. How anyone in their right mind thought AI should have any place whatsoever in a press photography prize is beyond comprehension. The organisation has now withdrawn permission to use AI or Generative Fill, but only after some stiff criticism from photographers.

My concerns around the widespread use of AI in image creation are currently threefold:

1. The training data required for machine learning amounts to mass copyright infringement, almost impossible for creators to track and prosecute. They’ll certainly be last in line to benefit from it financially.

2. Trust in genuine imagery will collapse, leaving us even more exposed to false narratives from toxic groups and regimes.

3. The public will become increasingly ‘anti-photographer’ if they fear that, with or without the photographer’s permission, images can be scraped and used to generate content of a damaging or downright nasty nature. We’re already seeing a massive rise in AI-generated child abuse imagery, and unless it’s addressed head-on, it’ll only get worse. In turn, photographers will find it increasingly difficult or even impossible to document news or simply everyday life if they can’t include people.

A Visual Desert

Left unaddressed, each of those three concerns will eventually lead to a collapse in our visual culture. All that will be left will be kittens, sunsets and pretty landscapes, and none of those will be real either. The visual white noise of the internet will finally blot out anything of worth.

We can’t live in the past, yet all too many photographers, myself included, yearn for some kind of good old days. A time when photographers like Elliott Erwitt, Diane Arbus and many others could document even the simplest human activities without feeling as though they were committing some kind of crime. A time when pictures mattered more and had greater value, both culturally and in hard currency terms.

Here is my meagre hope: that while AI won’t go away, it will at least settle down into its own genre, an art form in its own right and a plaything for people with too much time on their hands. I hope also that, like the resurgence of vinyl and analogue photography, non-AI-tainted photography might see an increased appreciation. It might even lead to improved values for professional photographers’ work. Miracles may happen.

AI to Restrain AI

Manufacturers are starting to integrate Content Credentials technology into cameras so images can be verified as having been altered (or not), meaning media outlets (and thereby the public) will know that what they’re seeing is authentic. With luck this will make it far easier to separate true from false, but it’s just the start. We need to reach a point where AI imagery can exist without it casting doubt on the veracity of news images.
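The principle behind this kind of verification can be sketched in a few lines of code. To be clear, this is a simplified illustration of the general idea (a camera signing image data plus an edit history at capture, and a verifier later checking nothing has changed), not the actual Content Credentials specification, which uses public-key certificates rather than the shared secret assumed here. The key and edit-log format are invented for the demo.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a key embedded in the camera.
CAMERA_KEY = b"secret-key-burned-into-camera"


def sign_capture(image_bytes: bytes, edit_log: str) -> str:
    """Return a tamper-evident signature over the image and its edit history."""
    payload = hashlib.sha256(image_bytes).hexdigest() + "|" + edit_log
    return hmac.new(CAMERA_KEY, payload.encode(), hashlib.sha256).hexdigest()


def verify_capture(image_bytes: bytes, edit_log: str, signature: str) -> bool:
    """True only if neither the pixels nor the logged edits have changed."""
    expected = sign_capture(image_bytes, edit_log)
    return hmac.compare_digest(expected, signature)


original = b"...raw sensor data..."
sig = sign_capture(original, "captured 2023-12-01; no edits")

# Untouched image with its original history verifies.
print(verify_capture(original, "captured 2023-12-01; no edits", sig))  # True

# Any alteration to the pixels breaks the signature.
print(verify_capture(original + b"generative-fill", "captured 2023-12-01; no edits", sig))  # False
```

A real implementation signs with the camera maker’s certificate so anyone can verify without a shared secret, but the outcome for the viewer is the same: an altered image can no longer present itself as the original capture.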

The image above was generated through DeepAI.org using this headline from The Guardian: “Sellafield nuclear site has leak that could pose risk to public”. It would be tempting (but on the whole, wrong) for media outlets to use AI-generated images to illustrate their stories. To be clear, The Guardian did not use this image to illustrate its story.

The Next 40+ Years

Whatever era we’re leaving behind, whatever we’re moving into, change will be both fast and slow depending on your perspective. Whatever happens, we’ll look back on this decade, at the photographers who have passed (and who will yet do so) and we’ll be tempted to draw an arbitrary line and say this was the end of an era.

The truth is, the current era started almost 20 years ago, and it will easily take another 20 years to stop starting, by which time it’ll be about ready to start stopping. By which time I’ll be 107 years old (or more likely dead). Either way, it’s highly likely I’ll have stopped caring.


AI AI, What’s All This Then?

Unless you’re living under a rock, you’ll be aware of a great deal of chatter about AI (Artificial Intelligence) and its increasing influence on all our lives. I suspect some of that chatter is AI-generated, but how would we know?

Of course in this article I’ll be contemplating AI’s impact on the future of photography. I think it’s going to be interesting.

The AI Roadmap

I say interesting because AI is still in the foothills of its potential (good or bad). Give it a few years and it will progress beyond what we can imagine right now.

Also bear in mind that different areas of photography will be affected in different ways and at a different pace.

Right now, AI images of people are creepy, weird and downright unnerving (see examples below generated using DeepAI.org).

Inanimate objects are generally better, but are they convincing? I’d say this is where progress will advance most rapidly. For now, many product shots are rendered using computer graphics anyway, so AI will probably simply change how those renders are generated. Product photographers will still find themselves in demand for the more bespoke shoots.

Some areas could see no impact at all. Do you want AI-generated family photos? How about a wedding? What would be the point?

We’ve Been Here Before

Thinking about images for business, I see parallels between AI and the micro-payment stock photography of a decade or so ago: businesses embraced it as an easy way to fill the gaps between words on their websites, but many have since reverted to commissioned work because it’s more convincing.

There is currently a cost barrier to AI. It’s more expensive and time-consuming to get usable visual AI content for marketing purposes than it is to commission original work. However, even if the cost and quality of AI become non-issues, there’s the question of the human factor.

Microstock flourished while it was novel and before businesses realised they needed to connect with clients and consumers on a human level. They discovered their audiences weren’t engaging with the over-polished models and unrealistic scenarios of the stock world. Where we are now is that stock images supplement pictures of ‘real’ people, but they can’t replace them. The same will stand for AI.

In fact commissioned work (in my personal experience) has grown over the past decade. It will continue to grow as businesses use more video, which stock (and AI) imagery won’t be able to compete with for a very long time (if ever).

What stock could never replace, AI won’t be able to either. If anything, AI will replace stock imagery and we’re starting to see that happen.

Stocks and ShAIrs (ouch)

Shutterstock, the bête noire of photography and the murderer of the viable stock image industry, have seen the future. And the future is bleak.

They now have their own AI image-generating portal, which I suspect not only undercuts their contributing photographers, but might also be using the existing library of 400 million+ images (supplied by those same contributors) to feed the neural engine which generates the AI images. It’ll be interesting to see how Shutterstock plans on ‘rewarding’ contributing photographers when their images are reduced to AI fodder. An AI-generated image will contain data from hundreds (maybe thousands) of images from the library, so who gets paid for that data? Will photographers know which pixel was theirs?

Am AI Safe From All This?

Notwithstanding my tortured AI-themed puns, I can see how AI might impact certain areas of my work. But since I mostly concentrate on photographing real people, and since that is what businesses need, it’s hard to see how AI can compete there.

And AI currently works best when used to generate static content. Video would require an unimaginably high level of computing power (read ‘cost’) which doesn’t yet exist. I say yet, but processors based on quantum physics are emerging in laboratories and could be in our devices soon enough.

Ultimately I don’t think it matters what AI does, because one thing it can never replicate is reality. It will have its uses, but for my typical client there is nothing that can beat the human touch. I am going to confidently say, there never will be.

One More Thought

This is perhaps the most troubling thought too. AI has already been used to generate ‘deepfake’ news images and video. We can’t prevent it entirely, but news outlets will need new tools and rules to spot and stop it. That is where the real danger of AI lies. The last two words of that sentence are the perfect note to end on.