Once upon a time, photography was all about delayed gratification. You couldn’t see the results of your efforts until you had taken the film someplace to be developed and printed—a process that could take a day or more. And if you didn’t take many pictures, the roll of film might stay in your camera for months before you finished it and took it in for processing. If the photos were out of focus, too light, too dark, or poorly composed, you were out of luck.
Digital photography changed all that: You could see in an instant whether the photo you just took was any good. (This could have downsides, too, with interminable sessions of “Wait, Jane had her eyes closed and Bill wasn’t smiling. Let’s do one more…”) Gone were the film, chemicals (and subsequent hazardous waste), and waiting. Gone, too, was the habit of printing every single frame (sometimes twice, since most film processors offered double prints for a small extra charge)—today, only a small percentage of photos taken are actually printed on paper.
The Next Phase: Computational Photography
Until recently, however, digital photography had its own shortcomings, and not just because people would take pictures of their meals and post them on social media. Taking professional-quality photos with a digital camera requires expensive equipment, high-quality optics, and a fair amount of skill and know-how. Cell phone cameras, the most popular digital cameras in use today, have so far produced consistently mediocre pictures in terms of focus, color balance, and contrast, mainly because of the limits of the lens optics that can be crammed into a smartphone.
Now, we are entering a new phase of digital photography, one that can overcome the shortcomings of standard digital photography, even with smartphone cameras, using a set of techniques called computational photography.
The concept of computational photography is a bit squishy because it means different things to different people, but it mostly boils down to this: by exploiting the high speed, high resolution, large storage capacity, and processing power of modern camera devices, computational photography applies advanced algorithms to produce effects not possible with standard digital photography or post-processing, such as:
- Extending a photograph’s dynamic range (that is, the level of detail that can be captured in both bright and dark areas of the image) by processing multiple, differently exposed images of the same scene
- Controlling subject illumination (direction, intensity, and color)
- Selective blurring and sharpening of the image
- Simulated depth of field
- Multi-image panoramic stitching
- Three-dimensional imaging
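To make the first technique above concrete, here is a minimal exposure-fusion sketch in Python with NumPy. It is an illustration of the general idea, not any vendor’s actual pipeline; the function name, the `sigma` parameter, and the "well-exposedness" weighting are all simplifying assumptions.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend differently exposed frames of the same scene into one
    image with extended dynamic range (simplified exposure fusion).

    images: list of float arrays in [0, 1], all the same shape.
    Each pixel in each frame is weighted by how well exposed it is:
    values near mid-gray (0.5) get high weight, while clipped
    shadows and blown highlights get low weight.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    # Gaussian weight centered on mid-gray.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    # Normalize the weights across frames at every pixel.
    weights /= weights.sum(axis=0, keepdims=True)
    # Weighted per-pixel average of all frames.
    return (weights * stack).sum(axis=0)

# Example: a dark frame and a bright frame of the same tiny "scene".
dark = np.array([[0.05, 0.40], [0.10, 0.45]])
bright = np.array([[0.55, 0.90], [0.60, 0.95]])
fused = fuse_exposures([dark, bright])
```

In the fused result, each pixel leans toward whichever frame captured it closest to mid-tone, which is how detail from both the shadows and the highlights can survive in a single image.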
Only a couple of years ago, computational photography techniques were available only on specialized devices, such as the Light L16 multi-lens camera. Now, however, Apple has introduced them in its latest iPhone models, promising professional-grade results in one convenient package. The results, according to the many reviews of the new iPhones and their cameras, are startlingly better than those of standard digital photography.
The Google Pixel phone soon followed suit by adding computational photography features, and other camera and smartphone manufacturers will surely jump on the bandwagon as well, if they aren’t there already.
What It All Means
From the user’s standpoint, the process is pretty much the same: Point the device at the scene and click the capture button. The difference is that now the user can use simple controls later to adjust the focus, lighting, depth of field, and other aspects of the image. Behind the scenes, however, the change is more fundamental: Instead of capturing only a single color value (hue, saturation, and brightness) at each pixel, the device captures a much richer set of data about the scene—data that lets the user easily “customize” the image for presentation.
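One common piece of that richer data is a per-pixel depth map, which is what makes after-the-fact focus and depth-of-field adjustment possible. The sketch below is a deliberately crude approximation, not any phone’s actual algorithm: the function names are illustrative, and it simply blends a sharp image with a box-blurred copy according to each pixel’s distance from a chosen focal plane.

```python
import numpy as np

def box_blur(image, k=3):
    """Mean filter: average each pixel with its k-by-k neighborhood,
    using edge padding so the output matches the input shape."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def simulate_dof(image, depth, focus_depth):
    """Simulate depth of field after capture: pixels at focus_depth
    stay sharp; pixels farther from the focal plane blend toward
    the blurred copy."""
    blurred = box_blur(image)
    # Blend factor grows with distance from the focal plane, clipped to [0, 1].
    alpha = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)
    return (1 - alpha) * image + alpha * blurred

# Tiny grayscale "image" and a depth map where each row is farther away.
image = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
depth = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
out = simulate_dof(image, depth, focus_depth=0.0)
```

Because the depth map is stored alongside the image, the user can rerun this step with a different `focus_depth` at any time—the “adjust it later” workflow described above.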
Another application for these techniques is in augmented reality (AR) and virtual reality (VR). Because VR and especially AR rely heavily on scene understanding and manipulation, computational photography techniques lend themselves quite well to enabling faster, richer, and more accurate and realistic AR and VR experiences. Look for new, computational-photography-powered AR and VR applications and content in the near future.
Meanwhile, users should enjoy the new possibilities afforded by these technologies in their own photographic efforts. These technologies might not make you the next Ansel Adams, and won’t keep Jane from closing her eyes at the wrong time, but they just might let us, with only the smartphones in our pockets, take photos that are actually worthy of sharing.