WWYD and Artificial Intelligence

May 28th, 2023


I have entered this photograph in the WWYD-200 Challenge. In the interests of openness and as I think Artificial Intelligence in this forum's challenges merits discussion, I advise that about 66% of the editing has been done utilising the AI Image Generator in Photoshop Beta.

Since Photoshop and Lightroom are probably the industry standard for photo editing, I would be interested in members' views about the use of this or similar AI programmes in this challenge.

May 28th, 2023
I think it kind of defeats the purpose for me personally. People get into photography and photo editing for all sorts of reasons. I do it because it's a hobby. I'm sure the latest phone camera could take a way better picture than I could with my old Nikon, but then it's the phone being the photographer, and not me. Likewise, AI Image technology could probably edit a picture better than I could, but then I wouldn't be doing it. Why bother?
May 28th, 2023
It’s a really intriguing question. It seems if I use Expand in Snapseed that is AI and if I press Auto in LR that is also AI. Or even if I use my iPhone to capture a photo it is doing all kinds of correcting in capturing the image unless I use RAW. So I’m not sure where the limits will be drawn.
May 29th, 2023
As long as it is disclosed, the host of the round can decide, but as the others have said, our cameras and software are already low-key using it!
May 29th, 2023
At first, the subject of Artificial Intelligence intrigued me. But now that it has become a reality, like anything man achieves there is some good that comes from it and there is always some not so good.

Being old school and starting out in photography developing my own b&w film and prints while trying to mimic the work of Ansel Adams, I don't like AI. Adams spent hours working on prints, burning and dodging in a darkroom. Now we sit at a PC with a mouse, clicking on presets and sliders to get amazing results.
Sometimes when I see amazing photographs whether b&w or color I wonder, because of things I have been able to do myself, what did the photographer really see?
I came across a video on YouTube by Tony and Chelsea Northrup on AI and according to them many professional photographers are utilizing AI to create unbelievable results on their photographs.

Again I have to ask myself, is this what they really saw, or is this what they had hoped to have seen?
May 29th, 2023
@stephomy @skipt07 I align most with what these two had to say. My photo post today is of a card pattern I cut out and glued together; it's very cute... but it is not mine, artistically speaking. I see the possible good in AI, but I think, for artists who wish to be true to themselves, it removes the slow, steady growth that learning, knowledge and wisdom give us. For myself, I'm not renewing Topaz or ON1... in years past they've positioned their product as making our work look "so much better", and that's where I'll stop... while I still feel as though it's my own creativity.
May 29th, 2023
@Weezilou The card patterns may not be yours but this is a photography site and the photo is yours 🙂
May 29th, 2023
I’m distressed and troubled about the onslaught of AI - even Photoshop’s gone and caught the AI virus - what photo can one trust to be real and not AI-engineered? One hopes honesty and integrity will prevail, and that it will be viewed as ‘promptography’ only.

At the moment, depending on the digital manipulation skill of the human prompter and on the prompts themselves, it's still pretty easy to spot what has been AI-generated or had AI input; the image looks too perfect, or the colours are off, the shadows and light direction look flat and wrong, and many features are weird looking. But AI is evolving and improving at an impressive rate.

I think it’s great for creating super-surrealistic effects, especially in cinematography - but as a general editing tool for a photographer, I think it’s way over the top and takes away the thrill of learning and discovering new techniques and new places. Why put a photo of myself on top of a certain mountain when I know I’ve never been there? What’s the point? Maybe it’ll work if used wisely and discreetly and with candid and ethical intent. My belief is to get it right in camera, which does enough processing as it is anyway - and of course we should continue to do post-processing like cropping, colour correction etc, particularly if one shoots RAW - but not to the point where you input a prompt to add a yeti or a rose bush or 15 pretty birds on a tree to a photo you shot of your climb up Mt Everest.

I will not be downloading the beta version of Photoshop or Firefly or whatever they call it; I’ve always steered clear of Topaz and Gigapixel and the like. I can see when these products have been used, and looking at some of the EXIF data mostly confirms what I thought. Lightroom offers a denoise feature and other tools, which are often more than enough, and if I still can’t get the original image looking less noisy or more vibrant or whatever, then I’ll try again next time with different camera settings at a different time of day.

Graphic design and composite work is another kettle of fish entirely, and for that I’ll use Photoshop or some of the smaller programs. Not AI. I want my design to be mine, from my own imagination, not some robotic computer’s version.

Just my two cents worth. I’m pretty much with Stephomy @stephomy and Skip @skipt07 on this one.
May 29th, 2023
It’s a complicated question especially as the use of advanced photo editing programs is on a sort of continuum. If you take a picture of a scene with one unattractive feature and crop to edit it out, is that okay? How about if you take the same picture but instead of cropping you use cloning to erase that feature? Or to be more efficient and remove an item more effectively use content-aware fill to replace it? How big a step is it from there to use some form of AI to replace it and insert another item entirely?
I don’t have the answers to this, but I do think that once technology is out of the bag it’s hard to put it back in. I wonder if back in the 1600s or 1700s traditional artists turned up their noses at artists who used commercially produced oil paints instead of making their own. “Those aren’t even their own colours, they are someone else’s creation they’ve bought in a tube!”

As others have mentioned, just about all modern cameras apply some kind of in-camera automated editing before spitting out the files. The amount of in-camera editing seems to increase with every new release, and it is ‘hidden’ editing in the sense that the user generally doesn’t know what has been done to the image. I had a friend who told me he was bothered because the pictures he took with his new iPhone didn’t seem right. They all made his family etc. look better than they actually do in real life, but he didn’t know why or how.

I think that, for me, it all depends on the reasons for taking and editing the picture, and on honesty - not trying to present an altered picture as a truthful representation of something or somewhere as it actually was.

As far as the challenges here go I think the challenge description should indicate what is expected. In WWYD for instance I think pretty much anything in the way of editing goes, because that is what the challenge basically says. If you host a challenge then it’s up to you to give guidelines as to what you expect as a limit to editing freedom.
Even in cases where anything goes, if as some have stated, it is still easy to spot AI generated images which are flat with off colours, etc, then presumably they won’t win challenges anyway.

As for me 90% of my images are meant to be representations of what I imagined a scene could be, or should be, or I would like it to be, not an accurate representation of what was actually present at the exact moment I clicked the shutter. If I were in the business of producing images to be used for the purpose of identification or that claimed to be a representation of an object for sale etc. I’d have different criteria.

Sorry for blathering on for so long. It is an interesting question and one that I don’t think we will see answered at any time soon.
May 29th, 2023
Rather than repeat some of the well-written, thoughtful comments from above (I especially agree with Karen's @cocokinetic views) I'll just say what I personally feel. There seems to be a certain point where AI crosses over from traditional post-processing. While the camera does do its own alterations the moment you click the shutter, a photographer still has the option to play around with the image by applying effects, cropping, adjusting colour, etc. But AI (at least from my experience with it) overrides your decisions and decides what it thinks is best for the picture - inserting, overlaying or even altering the shot entirely.

Most of the time it looks unreal to me - even compared to the more fantasy-like results one can get from one's own playing around in a photo program like Photoshop or Lightroom. The picture above (in my humble and unsolicited opinion) looks like a poor composite to me. I can clearly see where one picture has been inserted over the other, and they don't seem to match as if it were one shot. If I were making a composite in my photo program - of my own making - I think I'd be able to blend it together much better than this. And I wonder if the second picture is actually from the photographer who's using AI. Did the computer put the model and garden in there, or was it the photographer?

Anyway - bottom line on this point is: I want the control over how a picture is processed. I don't want the computer to decide. And that's where I think it lands with WWYD. The challenge is not "What Would Your Computer Do?" It's "what would YOU do?"
May 29th, 2023
Interesting comments all. AI certainly seems to generate a lot of disdain. I think many people don’t really understand exactly what AI is and is not. For example, when a filter or preset is applied in a program such as Lightroom or a phone app such as Snapseed, this is not AI. It is merely a predetermined adjustment which, in the case of Lightroom, can be altered or modified by the user. Noise reduction or sharpening programs such as Topaz DeNoise AI and Sharpen AI do use machine learning in their algorithms, but again, the results can be modified by the user. In my case I apply Topaz to a layer in Photoshop and mask it into my image so that only specific areas are affected. Topaz products are an essential part of my workflow and I wouldn’t be without them.

The recent release by Adobe of Generative AI seems to be what has everyone setting their hair on fire. I downloaded the PS beta and fooled around with it a little bit. It is quite fun to play with, but the pictures it creates are not photography. In the description box you type a prompt such as “dog” and it will create a dog for you to add to your picture. The results are hit and miss (mostly miss) but really quite amazing when you think about it; however, they are nothing you would ever seriously use in a proper image.

But that is only one part of the equation. The technology is amazing, and used properly there is definitely a place for it in serious image making. For instance, it will make removing unwanted elements from a photo incredibly fast and easy. To remove, say, a garbage can, just select it, leave the description box empty, click “Generate”, and the program will remove it. What about an ugly sign that you just couldn’t keep out of your composition? Gone. Content-aware fill already did this, but Generative AI is faster, easier and can remove complex items more effectively. Painters do this all the time by just leaving out the stuff they don’t want.
What it comes down to is this: Generative AI, or any other AI program, is just another tool in the tool box. You can use it to create fake images that everyone will spot as fake right away, or you can use it judiciously to make your workflow faster, easier and to enhance the images you work hard to create. I for one will embrace it.
May 30th, 2023
@gardencat Very well reasoned Joanne, and I agree with you. The starting image from our camera is simply that - a start. Software brings it to life. As Ansel Adams said: “the negative is the score, the print is the performance.” Editing is a very personal thing and everyone has their own ideas about what constitutes too much. AI in particular seems to upset many but I feel that anything that helps me achieve the look I’m going for in my images is going to find its way into my workflow, AI or not.
May 30th, 2023
To me, it sounds like the use of "AI" is a marketing tool to intrigue the masses, or at least the non-photographers.

I think the nomenclature is bad. Perhaps one should think of it as "Algorithm Intelligence." Some programmer found a way to build a better mouse trap. Back in the day, if I wanted to stack night photos to enhance the sky, I remember having to pick marker stars in the first photo, then mark the same stars in the same order in all the rest of the photos. Now all I need to do is mask the foreground, if wanted, and push-de-button. And in the case of star trails, do you want to hide the gaps, or have the program fill them in?

As an aside ( @cdcook48 ), "Painters ... just [leave] out the stuff they don’t want." I think it was a National Geographic program that filmed over the shoulder of an elderly Japanese painter painting a temple in a heavily built-up area with wires, poles, cars, et al., but the painting was of a beautiful building.

Anyway, the algorithms keep getting better, but "intelligence", I don't think so.
May 30th, 2023
@byrdlip Yeah. I tend to call it "Simulated Intelligence". It can only do what the code tells it to do.
May 30th, 2023
There was an interesting remark from a commentator on a photography FB forum whereby he/she felt that AI is ‘creating’ an image instead of ‘capturing’ one. That resonates pretty well with me.

As far as editing goes, if it’s to improve on a camera-captured image, particularly if it’s been captured in RAW, then that’s fine - RAW photos have to be edited, after all, for colour correction, tone correction, and for smaller stuff like removing power lines, lens marks, straightening the image, removing red-eye, enhancing the sunset, whitening the sand, a bit of masking here and there, layer blending, converting to B&W etc etc. Even JPEGs are greatly improved with the proper application of these touches and edits.

I’ve used Touch Retouch for ages to remove power lines and spots and a few small photo-bombs like a plastic bag and so forth - and it works like a dream.

But when the photo edit is really untrue to the original, then I tune out. Purposeful composite imaging has more leeway, I feel, but again... how will a person ever learn to edit properly if we just ‘generative fill’ or insert stuff with the press of a button?

And then copyright issues are another kettle of fish altogether. I’ve tried AI editing with some other smaller programs, and in some of the offerings, I can clearly see the initials of the original artist in the final rendering! I mean….did this artist give permission for her work to be used in AI rendering? Of course I can easily edit the initials out, but that is completely wrong and downright dishonest. I'd be aghast if some of my stuff was used like this without my say in the matter.

I’ll be staying well clear of this technology - or until there are clearer and more ethical boundaries set. It's all still too muddy and messy, and life is muddy and messy enough as it is.
May 31st, 2023
i think it's about honesty and intent... i, personally, don't want the machine making decisions for me... i do, however, want it to help me do what *I* want to do... and i don't like lies... so if i put myself on a mountaintop i'd make it perfectly clear that it is a composite...

but in response to some of the comments above, i WILL use technology to help me produce the image that's in my mind's eye, rather than an image of the thing that was there...

that said, i have not tried AI and am not really inclined to try it as my sense is that i would lose control of the end result and it would no longer be mine...
May 31st, 2023
@cocokinetic To be clear, I would never use the software to "insert stuff with the press of a button." I spend a great deal of time carefully post processing my images and I take pride in producing a final image that represents my vision at the time of exposure. There are times, however, when, for reasons beyond my control, unwanted elements are present in my composition. In the past I used the clone stamp tool to remove these. A laborious but completely AI free method. When Photoshop first introduced AI technology with content aware fill it made the job much easier although I found the clone stamp was often still needed for touch up. (Touch Retouch is also AI powered content aware fill). Generative AI is essentially content aware fill on steroids. Rather than just looking at the surrounding pixels it looks at the entire photo and does a much better job of removing the object and replacing it with pixels that match the existing image. Copyright issues don't even enter into it. The photo is entirely mine and the result is true to my vision. I'm actually rather annoyed that Adobe also made Generative AI capable of creating fake pictures because that has nothing to do with photography. They should have left that to the SnapChats of the world. In the future, if you happen to see one of my posts, please be assured that the work is my own. I may or may not have used Generative AI on the image, but if I did it was only in much the same way that you would use Touch Retouch.
May 31st, 2023
@cdcook48
I hear you, Chris. Thanks for responding. To be frank, I find the whole issue confusing and controversial, but generally speaking, at the moment I'm not comfortable with the concept. There need to be vastly clearer boundaries and solid guidelines surrounding the AI subject, I feel.

I'm sure most photographers will be honourable and true to themselves, but still, as Scott Kelby himself even says, he may start doubting whether that gondola was really there, if John Doe really did capture the photo as he, Scott, sees it on his screen - and so forth. So if he's feeling and voicing that doubt, it's obvious that many photographers and viewers are going to be wondering the same.... just how much is real, and how much is AI.

I was reading an article somewhere in which Adobe claims that all outgoing imagery stemming from their stock supply of photographs - and INCOMING material - is theirs to use as they please. So if I add, say, one of their beautiful yachts in a stunning sunset to my boring bland photo... they then retain the rights to do what they want with my own original boring bland photo, which they changed with their AI sunset. If this is true, I don't understand how they can do this. I'm also not sure if this applies only to the beta phase, or what they will do once the full version is released.

So I don't know where this is all going, but I guess time will tell. In the meantime I'll step very cautiously until I understand the whole development better. I realise it's progress, it's not going to go away, and that we are going to have to adapt. Being a complete Luddite in this story is definitely not the way to approach the subject.

Again, I thank you for your reply and your input. Much appreciated.