We Tried Photoshop’s New AI Tool on Art, With Mixed Results

Last Tuesday, May 23, Adobe debuted a brand-new Photoshop tool called “Generative Fill,” which allows users to modify their images directly through text prompts or extend an image beyond its original borders. The tool, powered by the company’s new generative AI model, Firefly, is accessible in beta through the desktop app. Online, some art history lovers are having a blast expanding their favorite artworks beyond the frame, seemingly without limits — while others are bemoaning the ways in which the tool can distort an artist’s original vision.

One key feature to note is that Adobe Firefly is trained only on the Adobe Stock library, rather than a dataset composed of all sorts of scraped content available online, thereby apparently evading the prominent criticisms of art theft and copyright violation that tailgate other generative image software like DALL-E 2 and Stable Diffusion. But in the short time I spent playing around with the tool, I can safely say there are some kinks to work out before our dear photo editor friends should panic.


Thankfully, getting started with Generative Fill is pretty simple. Once you’ve downloaded the app, all you have to do is open the image you’d like to mess around with and start selecting the areas you’d like to work on. Getting your desired output is actually the hard part! For the first pass, I extended the canvas size in Photoshop to create more space around the image for the Generative Fill tool to fill in. Inspired by this tweet containing an expanded field of Vincent van Gogh’s “Starry Night” (1889), I wanted to see if the software could “imagine” what lies beyond the four edges of several famous artworks.

Like Midjourney, DALL-E 2, and Stable Diffusion, Generative Fill can work from text-based prompts. However, if you simply select the empty area of the enlarged canvas surrounding your original image and click the Generate button, the tool will do its best to extend the original image based on the available information. You can see how it was more successful with an M.C. Escher lithograph compared to Thomas Eakins’s “Portrait of Dr. Samuel D. Gross (The Gross Clinic)” (1875) — in the latter, a creepy Francis Bacon-esque figure appears in the bottom-left corner.

This software clearly struggles with portraiture and the body. On the other hand, the tool did a pretty reasonable job extending the background of Henri Rousseau’s “Tiger in a Tropical Storm” (1891), though I had to crop out Rousseau’s signature, as text tends to mess with the generative output of most AI software. The tool fared even better with Wayne Thiebaud’s “Flatland River” (1997).

An expanded composition of Henri Rousseau’s existing “Tiger in a Tropical Storm (Surprised!)” (1891), oil on canvas, 51 x 64 inches (image via Wikimedia Commons)

Taking a leaf from a Twitter thread that used the Adobe tool to reimagine popular album art, I had to toss in the cover art for Lorde’s “Melodrama” and see what would happen if I put the painted indie-pop icon in outer space. I extended the canvas, selected the empty space around the album cover, typed “outer space” into the text box, and … it got a little cool and funky! Cool in that the generated portion made it look like a giant Lorde was snuggling into a planet, but funky in that Lorde’s body wasn’t really a body at all.

The Generative Fill tool supplies three different iterations in one pass, and the generative quality reminded me of the free-access AI image generation tool Craiyon (formerly known as DALL-E Mini) that became popular last summer for its absurd, warped productions and screwed-up body parts.

Lorde’s “Melodrama” album cover with an extended background using the text prompt “outer space”

After fussing with expanding images beyond their compositions, I tried my hand at re-contextualizing the existing backgrounds in these artworks using the “Magic Wand” selection tool. I spent longer than I’m willing to admit wrestling with the selection tool to highlight the areas I wanted to alter, so my assessment of Generative Fill’s need for fine-tuning could very well just reflect a steep learning curve on my part. I selected the background of Edward Hopper’s “Nighthawks” (1942), remixed it with a “zombie apocalypse” text prompt, and was actually rather pleased with the results. However, I did specify “Edward Hopper style painting of zombie apocalypse” in the text box and received something more along the lines of Yves Tanguy coupled with Simon Stålenhag instead … Not that I’m complaining!

Regardless, the tool produces mish-mashed images with a dreamy, uncanny feeling similar to what Craiyon was sending out into the ether about a year ago. Generative Fill does bring us closer to realizing what we thought was confined to our mind’s eye, but it definitely requires more alterations and clean-up on top of what it supplies to create a well-rounded image with indistinguishable manipulations.

In a weird way, Photoshop could end up inadvertently phasing itself out through this update, which lets us use our words instead of forcing us to memorize tools and settings or rewatch the same YouTube tutorial four times in a row. Nevertheless, I think we’d all prefer it if researchers spent more time training AI to properly transcribe what’s on our resume PDFs into the required text boxes of job applications, instead of encroaching on and potentially absorbing an already vulnerable industry of exceptionally hard workers who don’t get enough credit!

Source: Hyperallergic.com
