Me: Is a photo taken with a phone a representation of what its camera saw or what its software believed to be there? 😼


@dos not quite a Moon shot with imagined detail, but yeah, proof of the latter.

@dos What you have to keep in mind is that the filters over a phone's photo sensors aren't EXACTLY equivalent to the wavelength spectral response of your eye's cones combined with the response of its rod cells. (Your rods actually have some color sensitivity that peaks at 498nm, between the blue and green receptors; and what we think of as red actually peaks at 564nm, which is really closer to reddish orange — a true deep red is more on the order of 680nm.) So given that you're dealing with approximations to start with, and then add that most modern indoor lighting doesn't have a good spectral spread but instead pukes out light along a few narrow lines, when you take a picture it's at best a color approximation, and the best you can do is adjust it to where it looks aesthetically best.

Looking at the original photo, it is way too rich in the green. You've adjusted that down some, but in my view not quite enough; to put it another way, it could use a little more red in the lower light levels and a touch more blue all around. The photo also looks a little soft and isn't using the full dynamic range. I'd stretch it to the full dynamic range, i.e., so the very darkest pixel is absolute black and the very whitest absolute white, and I'd tweak the high frequencies up just a tad so the sharpness is better and the texture of his fur is a little more visible — although you don't want to overdo it or you'll get fringes around the whiskers.
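The three adjustments described above — stretching to the full dynamic range, boosting high frequencies for sharpness, and warming the shadows while adding a touch of blue — can be sketched in plain NumPy. This is a minimal illustration, not how any particular phone or editor implements them: the box blur, the gain values, and the luma-weighted red boost are all my own assumptions (real tools typically use Gaussian blurs and editable curves).

```python
import numpy as np

def stretch_dynamic_range(img):
    """Linearly rescale so the darkest pixel maps to 0 (absolute black)
    and the brightest to 255 (absolute white)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return img.astype(np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).round().astype(np.uint8)

def unsharp_mask(img, amount=0.5):
    """Boost high frequencies by adding back the difference between the
    image and a blurred copy. Too large an `amount` produces exactly the
    fringing around fine detail (whiskers) mentioned above."""
    img = img.astype(np.float64)
    # 3x3 box blur as a cheap low-pass filter (assumption: a Gaussian
    # would give smoother results).
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= 9.0
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).round().astype(np.uint8)

def warm_shadows(rgb, red_gain=1.08, blue_gain=1.02):
    """Add red mostly in the lower light levels and a touch of blue all
    around (assumption: simple per-channel gains weighted by luma; a real
    editor would expose curves instead)."""
    rgb = rgb.astype(np.float64)
    luma = rgb.mean(axis=2) / 255.0          # 0 = dark, 1 = bright
    out = rgb.copy()
    out[..., 0] *= 1 + (red_gain - 1) * (1 - luma)  # more red where dark
    out[..., 2] *= blue_gain                         # slight blue overall
    return np.clip(out, 0, 255).round().astype(np.uint8)
```

For example, a grayscale patch spanning 50..200 comes out of `stretch_dynamic_range` spanning the full 0..255, and `unsharp_mask` leaves a flat region untouched while exaggerating edges.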