Zack Apiratitham

On Post-processing a Photograph

Francesco Carucci on PetaPixel:

The RAW image, which comes from reading the Bayer Filter Mosaic, can not be visualized without a transformation to create an RGB image that can be displayed on a screen or printed on paper.

The interpretation of the raw file to reconstruct colors from the Bayer Filter Mosaic (what is often referred to as “Color Science”) and produce the final image applies a number of subjective transformations and selectively throws away information. The subjective interpretation must happen somewhere between capturing an image and displaying it. Someone has to take the subjective decisions about what information to throw away, what information to keep and how to transform the information to be able to visualize it.


When you read “no filter” or “straight out of camera”, what you are really reading is “I’m leaving the post-processing choices to the engineers who designed the camera”.
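To make the "transformation" Carucci describes concrete: here is a minimal sketch of the demosaicing step, assuming an RGGB Bayer layout and naive nearest-neighbor interpolation (the function name and layout are illustrative; real camera pipelines use far more sophisticated interpolation, plus white balance, a color matrix, and tone curves — exactly the subjective choices discussed here).

```python
def demosaic_nearest(raw):
    """Reconstruct an RGB image from a single-channel RGGB Bayer mosaic.

    raw: 2-D list of sensor values with even dimensions. Each 2x2 tile
    holds one red, two green, and one blue sample; this naive version
    copies one sample of each color across the whole tile.
    """
    h, w = len(raw), len(raw[0])
    rgb = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y][x]          # red site (even row, even column)
            g = raw[y][x + 1]      # one of the two green sites
            b = raw[y + 1][x + 1]  # blue site (odd row, odd column)
            for dy in (0, 1):
                for dx in (0, 1):
                    rgb[y + dy][x + dx] = (r, g, b)
    return rgb
```

Even this toy version has to decide which green sample to keep and which to throw away — a tiny instance of the information loss Carucci is talking about.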

Very well put by Carucci. I am a firm believer that a large portion, if not in some cases the majority, of the time it takes to create a photograph goes into post-processing1. But in this day and age of digital photography, where any image can be conjured up in Photoshop, the words "post-processing" and "editing" have taken on the connotation that the producer of the work is not being truthful. It sometimes feels almost as if those who are so in-your-face about their "no filter" photos think they have some sort of moral high ground for not touching up their photos.

Now, with great cameras so ubiquitous thanks to smartphones, these "no filter" images are in fact heavily post-processed: the camera likely applies far more adjustments than an average photographer would make to their RAW images.

From Apple's press release for iPhone 11 back in 2019:

Next-generation Smart HDR uses advanced machine learning to capture more natural-looking images with beautiful highlight and shadow detail on the subject and in the background. Deep Fusion, coming later this fall, is a new image processing system enabled by the Neural Engine of A13 Bionic. Deep Fusion uses advanced machine learning to do pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo.

So yeah, every image everyone takes nowadays is very much so post-processed.

  1. Of course, with the exception here being photojournalism. ↩︎


Hey, I’m Zack. Thanks for reading!

I'm a software developer originally from Krabi, Thailand, currently living and working in the suburbs of Boulder, Colorado, USA. This blog is a place for me to write about my interests and things I find worth sharing.

Feel free to say hi via email or on Mastodon, and subscribe to the RSS feed for more!

© 2012-2023 Zack Apiratitham