
Dynamic Range

The Online Photographer has posted an excellent article, More on Dynamic Range, on the range of brightness in a scene: how it is captured as a photographic image, how that range fits into the range of lightness levels the camera records, and how it is expressed in the rendered medium, whether a JPEG image viewed on a monitor or a paper print.

He's right that dynamic range is the most abused, misused and poorly understood term in digital photography. It's the only shorthand we have for "range of brightness values" or "range of tonal values," both of which will give your fingers cramps if you write them often enough.

Many photographers lack an understanding of the basic process of recording an image and producing a visible print from it. There are crucial but precise distinctions to be made, distinctions that took a long time and much expertise to establish in analog photography, so the confusion is not surprising.

The first thing to consider is the range of brightness in the scene (which the Online Photographer article demonstrates and discusses). It may seem obvious to some, but it is often counter-intuitive, that a distinction exists between the range of measurable brightness values in the scene and the representation of those values as tonal values in the recording medium (film or sensor). Remember that most of the light in a scene is reflected, but some comes directly from light sources or specular reflections; for purposes of exposure, it is best to consider only reflected light and ignore specular highlights, since they contain no detail or information.

The difference between the range of brightness in the scene and its translation into the tonal levels recorded by the camera (density in film, numeric levels in digital) is not immediately obvious. The camera does not record brightness itself but some analog of it: clumps of grain or numbers. To see the picture, the range of tonal levels must be translated back into a range of brightness values. We do this when printing a negative onto photographic paper or viewing a transparency slide by transmitted light (a projector or lightbox).
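The scene's brightness range is usually quoted in stops, where each stop is a doubling of brightness. A minimal sketch of that arithmetic (the luminance figures are illustrative assumptions, not measurements of any particular scene):

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Dynamic range in photographic stops: each stop doubles brightness."""
    return math.log2(brightest / darkest)

# A sunlit scene with highlights around 4000 cd/m^2 and shadow detail
# around 4 cd/m^2 (hypothetical values) spans roughly ten stops:
print(dynamic_range_stops(4000, 4))  # → about 9.97
```

The same log-ratio applies whichever range you are measuring, whether scene brightness, sensor input, or output medium, which is part of why the single term "dynamic range" gets stretched so far.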

In the digital realm, the image rendered from raw capture data or printed to paper is the output, which must be translated into reflected or transmitted light so we can view the image. A complication in digital photography is the JPEG image, which places limitations on the original data. The format hardly matters, though: a TIFF would be no different, since every image rendered from capture data has contrast curves applied, both to fit the image within the range of tonal values the format can store and to be "pleasing" to the eye. Linear data is not pleasing to the eye because it contains too large a range of tonal levels and a corresponding brightness range; it won't "look" like the original scene as the eye saw it.

When you are talking about dynamic range, you first need to ask: which range? The range of brightness levels in the scene? The range the sensor can capture as input? The range that can be rendered as output? And are you considering a range of brightness values or of tonal values?

The scene has a range of brightness values.

The recording medium (film or sensor) has a range of brightness values it is sensitive to (this is where ISO comes in) and a range of tonal values it uses to express them. The brightness levels are translated into those tonal levels (represented by density in analog film or by numbers in digital data).

The output medium has a range of tonal levels it is capable of storing and expressing as brightness values when viewed.

The complications arise from the need to match the range and step of tonal levels in the input to those of the output. Further complicating things, the JPEG image has its own set of curves and translations, and when an image is printed, the paper and inks place their own limitations and curves on the translation. The environment in which the image is viewed has its own limitations and effects on the brightness values perceived.
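The mismatch in "step" between input and output is easy to see in the quantization alone. A minimal sketch, assuming a linear remap from a 12-bit raw range to 8-bit output (real pipelines apply curves before quantizing, so this isolates just the step-matching problem):

```python
def requantize(level, in_bits=12, out_bits=8):
    """Linearly map a tonal level from one bit depth's range to another's."""
    in_max = (1 << in_bits) - 1    # 4095 for 12-bit
    out_max = (1 << out_bits) - 1  # 255 for 8-bit
    return round(level * out_max / in_max)

# Sixteen adjacent 12-bit input levels collapse into one or two 8-bit
# output levels, one source of banding when output steps are too coarse.
print(requantize(4095))                           # → 255
print(sorted({requantize(v) for v in range(16)})) # → [0, 1]
```

Sixteen distinguishable input steps surviving as only two output steps is the kind of loss the rendering curves are designed to spend wisely, allocating the scarce output levels where the eye will notice them most.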

The capability of a camera or sensor cannot be judged by looking at a random example of output from a camera's JPEG engine. That would be the equivalent of judging a film by the quality of processing and printing from a randomly selected corner drugstore.
