Photoshop's New 'Select Subject' Feature and How It Works

Dec 13, 2017 9:05:00 AM

Even before there were digital images and image editing software, graphic artists had an occasional need to extract an arbitrary shape (such as the outline of a human subject) from an image so that it could be placed in other images. In the days of chemical photography, this involved complex darkroom techniques or even painstakingly cutting things out of paper prints with scissors. Imagine fitting that into your schedule these days!

Digital image editing software, even advanced products such as Adobe's Photoshop, was not much better—the scissors and tape were replaced by carefully tracing the shape's outline by pointing and clicking. Some attempts to automate this process, such as edge detection algorithms, work well as long as the desired shape has reasonably sharp edges and is easily distinguished from the background. They run into trouble, however, with busy backgrounds and indistinct outlines, such as foliage, animal fur, or human hair.
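To see why edge detection struggles with soft outlines, consider how a classic detector works: it responds strongly where pixel intensity changes sharply and weakly where it changes gradually. Here is a minimal sketch of the standard Sobel operator in plain NumPy (the image and function names are illustrative, not any product's implementation):

```python
import numpy as np

def sobel_edges(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    Sharp intensity changes (crisp outlines) produce strong responses;
    gradual transitions (fur, hair, haze) produce weak ones, which is
    why edge-based selection fails on fuzzy subjects.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return out

# Synthetic test image: a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = sobel_edges(img)
```

On this synthetic image, the response is zero inside the square and on the flat background, and nonzero only along the square's boundary—exactly the behavior that breaks down when no crisp boundary exists.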

Machine Learning to the Rescue

The advent of machine-learning techniques is changing all that. In an upcoming version of Photoshop, Adobe will introduce a feature called “Select Subject.” With one click, the software determines which parts of an image are subjects or foreground images, and which parts are background. You can then select, copy, paste, move, morph, or do anything else to each subject. Cluttered backgrounds, fuzzy sweaters, and windblown hair present no problem, and a subject appears without the gaps, unwanted background, and other artifacts that make it obvious it was “Photoshopped.”

All of this magic is brought to you courtesy of machine learning—and years of vented frustration in online design forums.

How Does It Work?

For obvious reasons, Adobe is not releasing details about the machine learning technology (which it calls Adobe Sensei) that it’s using to implement the “Select Subject” feature. But we know enough about machine learning in general to make some educated guesses. It’s likely there’s an artificial neural network (ANN) involved, possibly a deep-learning ANN with any of several possible configurations. The details of the configuration—the number of layers, the number of nodes in each layer, the pathways between the nodes, and the mathematical relationships that govern each node’s response to incoming signals—are the proprietary “special sauce” that makes it all work.
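To make those configuration terms concrete, here is a toy two-layer network that maps per-pixel features to a foreground probability. Everything here—the layer sizes, the activation function, the random weights—is illustrative only; Adobe's actual Sensei architecture is not public, and a real segmentation model would be a far larger convolutional network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """A node's response to its weighted incoming signals."""
    return 1.0 / (1.0 + np.exp(-z))

# Toy configuration: 3 input features per pixel (e.g. R, G, B),
# one hidden layer of 4 nodes, one output node. The number of
# layers, nodes per layer, and connection weights are exactly the
# kind of choices the article calls the proprietary "special sauce."
W1 = rng.normal(size=(4, 3))   # input -> hidden connections
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # hidden -> output connections
b2 = np.zeros(1)

def foreground_probability(pixel):
    hidden = sigmoid(W1 @ pixel + b1)       # each hidden node fires on its inputs
    return sigmoid(W2 @ hidden + b2)[0]     # probability the pixel is foreground

p = foreground_probability(np.array([0.9, 0.2, 0.1]))
```

With untrained random weights, the output is essentially a guess—which is why the training step described next matters so much.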

It is also likely that Adobe had to train its Sensei technology on thousands upon thousands of images, most or all of which had to be carefully annotated by humans to show the system what constituted foreground versus background in each image. This annotation effort was probably the most labor-intensive piece for Adobe. With appropriately tagged training data, it was likely a straightforward task to have the system run through enough training cycles and self-tweaking to become a solid, reliable feature for Photoshop.
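The "training cycles and self-tweaking" loop can be sketched in miniature. Below, a simple logistic-regression classifier learns to separate foreground from background pixels using synthetic, pre-labeled data (a stand-in for human-annotated images; the data, learning rate, and iteration count are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annotated data: each row is a pixel's (R, G, B) values,
# each label is 1 for foreground, 0 for background. A real training set
# would hold millions of human-annotated pixels across many images.
fg = rng.normal(loc=0.8, scale=0.1, size=(200, 3))   # brighter subject pixels
bg = rng.normal(loc=0.2, scale=0.1, size=(200, 3))   # darker background pixels
X = np.vstack([fg, bg])
y = np.concatenate([np.ones(200), np.zeros(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
b = 0.0
lr = 0.5

# Training cycle: predict, measure error against the human labels,
# then nudge the weights downhill to reduce that error.
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

After a few hundred cycles the classifier separates these (deliberately easy) synthetic classes almost perfectly; a production model repeats the same predict-and-adjust loop at vastly larger scale.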

Next: Video

We’ve all seen TV weather announcers standing in front of satellite images or graphical five-day forecasts. But in the studio, the announcer isn’t really standing in front of the satellite map—he or she is standing in front of a blank background of a specific color, usually a shade of green. The video production system replaces the green with the satellite image or other graphic so that it appears the announcer is in front of it. This technology has been in use for many years, even before the advent of digital video.
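The green-screen replacement described above (chroma keying) is simple enough to sketch directly. This minimal version assumes RGB frames as floats in [0, 1] and uses a hand-picked threshold; broadcast systems use far more sophisticated keying, but the core idea is the same:

```python
import numpy as np

def chroma_key(frame, background, green_threshold=0.4):
    """Replace green-screen pixels in `frame` with `background`.

    A pixel counts as "screen" when its green channel dominates both
    red and blue by more than the threshold—i.e., it matches the
    studio backdrop rather than the announcer.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    is_screen = (g - np.maximum(r, b)) > green_threshold
    out = frame.copy()
    out[is_screen] = background[is_screen]
    return out

# Tiny 2x2 example: top row is green screen, bottom row is the subject.
frame = np.array([[[0.1, 0.9, 0.1], [0.1, 0.9, 0.1]],
                  [[0.8, 0.5, 0.4], [0.7, 0.6, 0.5]]])
weather_map = np.full((2, 2, 3), 0.5)   # stand-in for the satellite image
composite = chroma_key(frame, weather_map)
```

The green pixels are swapped for the map while the subject pixels pass through untouched—and the whole trick collapses the moment there is no uniform green behind the subject, which is the gap machine learning may fill.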

But what if you wanted to do the same for a subject who isn’t standing in front of a green studio background? Perhaps someone performing in front of a brick wall, walking on a beach, or standing in a crowd of people? Current video technology has a hard time with this, and the results are less than ideal, especially for live broadcasts.

Machine learning can help us here, too. The same general approach that Adobe is using to identify a subject in a still image could be used in the video realm as well, and some researchers are currently pursuing just that, with promising results. There’s nothing to report just yet, but stay tuned for news on that as it develops.

Adobe’s Sensei is just one example of the practical uses to which machine learning is being applied, right now. It’s what distinguishes the current state of the art in artificial intelligence from that of the past—it has emerged from the laboratory and found its way into real, useful products. There’s a bright future ahead for machine learning—and for those who take advantage of it.

Written by Abdul Dremali

Abdul Dremali is a key content author at AndPlus and a driving force in AndPlus marketing. He was also instrumental in creating the AndPlus Innovation Lab which paved the way for the company’s leadership in Artificial Intelligence, Machine Learning, and Augmented Reality application development.