How the Basic Science We Learned at School Can Improve Our Photography


Understanding the basic physics of light will help us achieve the best photographic results. Here are some simple explanations of how the lessons we learned in science classes apply to our photography.

There's a lot more to light than meets the eye. Okay, that's an awful pun, but it is both metaphorically and literally true. Light is a complex topic that we don't need to understand in order to get through our daily lives. More light hits your retina than it can convert into nerve impulses to send to the brain, and even less of the light landing on your camera's sensor gets turned into an electronic signal and passed to the camera's processor.

In other words, our eyes can't see all the light there is; much of it is invisible to us. Even so, they see more than our cameras do.

Learn to Use Your Camera's Dynamic Range

On a sunny day, your eyes can pick out details in the bright highlights of the sky as well as in the shadows of objects on the ground. Your camera, with a single exposure, can't record as wide a range of tones. As you can tell from the above image, though, sensor technology has improved dramatically; sensors can now hold the highlights and still show fine detail in the dark areas. It wasn't that many years ago that I would have had to bracket the exposures of that scene and then combine them into a high dynamic range (HDR) image to keep the top of the lighthouse while pointing the camera directly at the sunrise.

As we age, we gradually lose some of that ability, but a healthy eye should be able to see somewhere between 18 and 20 stops of range between black and white. Top-of-the-line cameras manage between 12 and 15 stops, although a sensor announced this year is claimed to have a 24.6-stop range. Don't worry too much about that, though; your camera will still take fabulous pictures.
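If you like to put numbers on that, a stop is just a doubling. Here's a quick Python sketch of how stops translate into contrast ratios; the 50,000:1 scene contrast in the last line is simply an illustrative figure.

```python
import math

# Dynamic range is usually quoted in stops: each extra stop doubles the
# contrast ratio between the brightest and darkest tones that can be
# captured in a single exposure.
def contrast_ratio(stops):
    return 2 ** stops

for stops in (12, 15, 20):
    print(f"{stops} stops = a contrast ratio of about {contrast_ratio(stops):,}:1")
# 12 stops -> 4,096:1, 15 stops -> 32,768:1, 20 stops -> 1,048,576:1

# Going the other way: a scene whose highlights are 50,000 times brighter
# than its deepest shadows spans log2(50,000), roughly 15.6 stops.
print(round(math.log2(50_000), 1))
```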

The importance of our visible spectrum

What we call visible light is the small fraction of the electromagnetic spectrum that our eyes and our cameras can detect. I find it quite incredible that we can differentiate all the colors of the spectrum within a band that's only about 320 nanometers wide. Within that band we can tell apart the seven traditional colours of white light and all their combinations.

It is fortunate for us that so much of the sunlight that reaches the earth falls within the band from roughly 380 to 700 nanometers. This is a result of our planet's location in the Goldilocks zone: exactly the right distance from the right kind of star. In addition, our atmosphere contains an ozone layer that blocks most of the harmful ultraviolet light before it reaches us. If we were bathed in more energetic radiation, such as ultraviolet or gamma rays, we wouldn't last very long. At the other end of the spectrum, if our vision relied on much longer wavelengths, we would need far larger eyes, and threading a needle would become very difficult.

What is the impact of light bending on our images?

As the earth spins and dawn arrives, we see the sun appear before it is physically above the horizon. That's because, just as when it passes through a prism or raindrops, light bends when it enters the atmosphere. This bending, or refraction, lets us see the sun while it is still below the horizon. It's the same effect as when you put a spoon into a glass of water and it appears to bend.
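For the numerically curious, the bending is described by Snell's law. Here's a small Python sketch of the spoon-in-a-glass case, using the commonly quoted refractive indices of air and water as rough values.

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# Rough refractive indices: air ~ 1.00, water ~ 1.33.
n_air, n_water = 1.00, 1.33
incidence_deg = 45.0

refracted_deg = math.degrees(
    math.asin(n_air / n_water * math.sin(math.radians(incidence_deg)))
)
print(f"A ray hitting the water at {incidence_deg:.0f}° continues at about {refracted_deg:.1f}°")
# About 32°: the change of direction at the surface is what makes the
# submerged part of the spoon look bent and displaced.
```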

White light consists of seven different colours: red, orange, yellow, green, blue, indigo, and violet. You can see them in a rainbow or on the cover of the Pink Floyd album The Dark Side of the Moon.

Each color has its own wavelength, and each travels at a slightly different speed through a medium such as glass or water. White light is split into its component colors because they bend by different amounts. Red light is slowed the least and is therefore refracted the least; at the opposite end of the spectrum, violet light is slowed the most and is refracted the most. This splitting of light is called dispersion, and the stronger the refraction, the greater the dispersion tends to be. Diamond, as you might expect, has an unusually high refractive index and strong dispersion, which is what makes it sparkle.
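Putting rough numbers on that, here's a short Python sketch using the same Snell's law relation as above, with approximate refractive indices for a typical crown glass; the exact values vary from glass to glass.

```python
import math

# The refractive index of glass is slightly higher for violet light than
# for red, so the two colors are bent by different amounts.
# Approximate values for a typical crown glass:
n_red, n_violet = 1.514, 1.532
incidence_deg = 45.0

def refracted_angle_deg(n_glass):
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n_glass))

angle_red = refracted_angle_deg(n_red)
angle_violet = refracted_angle_deg(n_violet)
print(f"red: {angle_red:.2f}°, violet: {angle_violet:.2f}°, difference: {angle_red - angle_violet:.2f}°")
# A difference of only a fraction of a degree, but over the length of a
# prism or a lens it is enough to fan white light out into a spectrum.
```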

This splitting of light is generally unwanted in photography; we don't want our lenses to create rainbows. We see it as color fringing, or chromatic aberration, around high-contrast edges, a fault that is common in cheap lenses. A perfect lens would be free of aberrations, with all wavelengths converging on the same point on the sensor. To get close to this, lens manufacturers use multiple glass elements within the lens that help bring the wavelengths back together. Modern lens technology keeps improving, and today's professional lenses show almost no visible aberrations.

Light can also be diffracted when it passes an edge. Imagine the ripples on water spreading around an obstruction; light behaves in the same way.

Why we usually avoid very small apertures

On a bright sunny day, stand outside and look at your shadow. You'll notice that its outer edge isn't as dark as the center. That softer, lighter edge exists partly because the sun isn't a point source and partly because light bends around you. The lighter outer edge of a shadow is called the penumbra, and the darkest part is the umbra. The further you are from the light source, the bigger the umbra becomes in relation to the penumbra and the sharper the shadow looks. This is worth taking into consideration when using flash or studio lights.

When you take a picture, light diffracts as photons bounce and bend around the aperture blades. This bending and bouncing becomes more prominent the smaller the aperture, because the proportion of diffracted to un-diffracted light is higher at small apertures.

That increase in diffracted light, which softens fine detail, is why most photographers avoid using the smallest apertures.
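To get a feel for how big that diffraction blur is, here's a rough Python sketch using the standard Airy disk approximation; the pixel sizes mentioned in the comments are typical figures rather than those of any particular camera.

```python
# The diameter of the Airy disk (the central diffraction blur spot) is
# approximately 2.44 * wavelength * f-number.
WAVELENGTH_UM = 0.55  # green light, near the middle of the visible band

def airy_disk_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for f_number in (4, 8, 16, 22):
    print(f"f/{f_number}: diffraction blur spot of roughly {airy_disk_diameter_um(f_number):.1f} µm")
# Roughly 5 µm at f/4 but about 21 µm at f/16 and 29 µm at f/22.
# With pixels typically 3-6 µm across, the blur spans several pixels at
# the smallest apertures, which is why fine detail starts to soften.
```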

Why the sky is blue

This bouncing, properly called scattering, happens not only when light hits boundaries but also when it encounters particles in the air. Blue light has a shorter wavelength and is scattered far more readily than red, which is why the sky looks blue during the day.
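How much more readily? Rayleigh scattering goes roughly with the inverse fourth power of the wavelength, so a quick calculation gives a feel for it; the wavelengths below are just representative values for blue and red.

```python
# Rayleigh scattering strength rises steeply as wavelength shortens,
# roughly in proportion to 1 / wavelength^4.
blue_nm, red_nm = 450, 650  # representative wavelengths for blue and red
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red")
# About 4.4 times more, which is why the scattered skylight looks blue.
```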

The sky appears whiter toward the horizon. Looking obliquely, the light passes through much more air, so the blue light is scattered over and over and loses its blueness. The extra particles scatter the other colors too, and light reflected from the planet's surface gets mixed in as well.

Together these factors mix the wavelengths back into something close to white light. Over the sea, though, the sky can look blue again where the blue of the water is reflected back into the air.

CPL Filter Use

The scattered skylight is polarized: instead of oscillating in random planes, the light waves vibrate predominantly in one plane. This polarization is strongest in the part of the sky at 90 degrees to the sun. If you attach a circular polarizing filter (CPL) and rotate it, you can cut out much of that polarized skylight when shooting at right angles to the sun, which makes the sky look darker.

Polarizing filters are also great for removing reflections from the water's surface, letting you see more clearly what lies beneath, because the reflection itself is polarized. You can likewise use one to remove glare from damp leaves in autumn, letting their colors appear more vibrant.
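If you're wondering why rotating the filter changes the effect, Malus's law describes how much of already-polarized light a polarizer passes at a given rotation. Here's a tiny Python sketch:

```python
import math

# Malus's law: a polarizer passes I = I0 * cos^2(theta) of light that is
# already polarized, where theta is the angle between the filter's axis
# and the light's plane of polarization. Unpolarized light is simply cut
# by roughly half, whatever the rotation.
for theta_deg in (0, 30, 60, 90):
    transmitted = math.cos(math.radians(theta_deg)) ** 2
    print(f"{theta_deg:>2}° between filter and polarization: {transmitted:.0%} of the polarized light passes")
```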

Physics Behind Those Glorious Sunsets

As the sun sinks lower in the sky, its light has to pass through more of the atmosphere before it reaches you, and the lower atmosphere is thick with dust and water vapour. The blue light is scattered away even more strongly, so mostly the warm reds and oranges reach our eyes.

Warm Colors Aren’t Warm at All

When I use the term warm colors, I mean warm in the psychological sense. Reds, oranges, and yellows are considered warm colors, while blues are cold. In physics it is the other way around. Picture a blacksmith heating a piece of metal: it glows red first, then yellow, and turns bluish-white as it gets hotter still. The gas torch a welder uses is hot enough to melt steel, and its flame is blue.
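One way to see this is Wien's displacement law, which links the temperature of a glowing object to the wavelength at which it shines brightest. Here's a small Python sketch; the temperatures are rough, typical values rather than precise figures.

```python
# Wien's displacement law: hotter glowing objects emit most strongly at
# shorter (bluer) wavelengths.
WIEN_CONSTANT_NM_K = 2.898e6  # nanometre-kelvins

def peak_wavelength_nm(temperature_k):
    return WIEN_CONSTANT_NM_K / temperature_k

# Rough, typical temperatures for illustration only.
for label, kelvin in [("candle flame", 1900), ("tungsten bulb", 3200),
                      ("midday sunlight", 5500), ("blue-white star", 10000)]:
    print(f"{label} ({kelvin} K): peak emission near {peak_wavelength_nm(kelvin):.0f} nm")
# The candle and the bulb peak in the infrared and glow red to yellow;
# the hottest source peaks well into the blue and ultraviolet.
```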

Your camera uses red, green and blue for a good reason

It's a common question: why do photographic sensors and computer monitors use red, green, and blue rather than other colors, such as red, orange, and yellow? The answer lies in engineering rather than pure science. The simplest engineering solution for producing white is to combine just these three colors; building a monitor or camera sensor around all seven colors would be expensive and complex.

As with everything else in photography, there are compromises. Cameras and computer screens that use the primary colors red, green and blue can’t reproduce the full range of colors we see in real life.

Even that isn't as simple as it first seems, because the range of reproducible colors varies from device to device. The most complete version of your image is the virtual one: the data your camera records when shooting raw, which is what your editing software understands your image to be. A more limited version is what appears on your computer screen, or in the camera itself if you shoot JPEG. The version you can print is different again.

To keep these versions consistent, we use color management. Color management defines, among other things, the maximum and minimum values each device can represent for red, green, and blue. Whole books have been written on the subject, far more than can fit in this article; the important thing is to keep all your devices (if you're not shooting raw) set to the same color space.

The most commonly used of these color spaces is sRGB. Adobe RGB is a larger space with more colors and was for a long time the standard for high-end digital printing. ProPhoto RGB is larger still and covers almost every color most printers can reproduce. Things have changed, though, and the printers I use now supply color profiles for every combination of paper and print, which gives the highest color accuracy.

For most photographers, it is enough to remember to use a color space no larger than the gamut of the device you are viewing or printing on.
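As a deliberately simplified illustration of what happens when a color from a wide gamut is forced into a narrower one, here's a toy Python sketch; it isn't real color management, and the sample values are invented purely for illustration.

```python
# A toy illustration of a gamut mismatch, not real color management.
# Expressed in the coordinates of a narrower color space, a color from a
# wider space can have channel values outside the 0-1 range the narrow
# space can represent. Forcing those values back into range ("clipping")
# changes the color. The sample values below are invented for illustration.
def clip_to_gamut(rgb, lo=0.0, hi=1.0):
    return tuple(min(max(channel, lo), hi) for channel in rgb)

saturated_green = (-0.15, 1.08, 0.02)  # out of range for the narrower space
print(clip_to_gamut(saturated_green))  # (0.0, 1.0, 0.02): hue and saturation shift
```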

What is the Difference Between Subtractive and Additive Light?

Red, green, and blue are the primary colors when we mix projected light. Mixed in pairs, they produce the secondary colours: cyan, magenta, and yellow. Light is additive, so mixing green and red produces yellow, and a monitor builds its whole range of colors by mixing these colored lights in different proportions.

Printer inks work the other way around: they subtract colors from white. Inks reflect some wavelengths while absorbing others, and by mixing cyan, magenta, and yellow inks in various proportions we can create another range of colors. In color management, the range of colors a device can produce is called its gamut.
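Here's a small Python sketch of both ideas, using simple 0-255 channel values: adding light builds toward white, while the printer's primaries are just white minus red, green, and blue.

```python
# Additive mixing: start from darkness and add light, channel by channel.
red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_light(*colors):
    # Channel-wise sum, capped at the brightest value the display can emit.
    return tuple(min(sum(channels), 255) for channels in zip(*colors))

print(add_light(red, green))        # (255, 255, 0)   -> yellow
print(add_light(red, green, blue))  # (255, 255, 255) -> white

# Subtractive mixing: start from white paper and let the inks absorb light.
# Cyan, magenta and yellow are what is left of white after removing red,
# green and blue respectively.
def ink_complement(rgb):
    return tuple(255 - channel for channel in rgb)

print(ink_complement(red))  # (0, 255, 255) -> cyan: absorbs red, reflects green and blue
```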

By working in a single color space, we can make sure we only use colors that fall where the two gamuts overlap. If we tried to display or print colors that are outside the capabilities of our screens or printers, we'd get some strange results.

You Don't Have Complete Control Over Color

In a similar vein, your screen has controls to adjust, at a minimum, its brightness and contrast, and so does everyone else's. Calibrating your screen is an important step in making sure the colors and tones of your prints match what you see while editing. Of course, when you share an image online, most people viewing it won't have calibrated screens, so your images may look brighter or darker than they should, or show more or less saturation and contrast. There is nothing you can do about that. But if you plan to print your photos and want accurate results, or if you'll be sharing files with other photographers, calibrating your monitor is vital.

Read More

This article, of course, only scratches the surface, and there is plenty more to learn about each subject I've covered. Fstoppers has over 33,000 articles containing a wealth of knowledge, and many of them go deeper into the topics I've briefly discussed here.
