Posted by Marc Levoy, Distinguished Engineer and Yael Pritch, Staff Software Engineer

Night Sight is a new feature of the Pixel Camera app that lets you take sharp, clean photographs in very low light, even in light so dim you can't see much with your own eyes. It works on the main and selfie cameras of all three generations of Pixel phones, and does not require a tripod or flash. In this article we'll talk about why taking pictures in low light is challenging, and we'll discuss the computational photography and machine learning techniques, much of it built on top of HDR+, that make Night Sight work.

Left: iPhone XS (full resolution image here). Right: Pixel 3 Night Sight (full resolution image here).

Anybody who has photographed a dimly lit scene will be familiar with image noise, which looks like random variations in brightness from pixel to pixel. For smartphone cameras, which have small lenses and sensors, a major source of noise is the natural variation in the number of photons entering the lens, called shot noise. Every camera suffers from it, and it would be present even if the sensor electronics were perfect. However, they are not, so a second source of noise is the random error introduced when converting the electronic charge resulting from light hitting each pixel to a number, called read noise. These and other sources of randomness contribute to the overall signal-to-noise ratio (SNR), a measure of how much the image stands out from these variations in brightness. Fortunately, SNR rises with the square root of exposure time (or faster), so taking a longer exposure produces a cleaner picture. But it's hard to hold still long enough to take a good picture in dim light, and whatever you're photographing probably won't hold still either.

In 2014 we introduced HDR+, a computational photography technology that improves this situation by capturing a burst of frames, aligning the frames in software, and merging them together. The main purpose of HDR+ is to improve dynamic range, meaning the ability to photograph scenes that exhibit a wide range of brightnesses (like sunsets or backlit portraits). All generations of Pixel phones use HDR+. As it turns out, merging multiple pictures also reduces the impact of shot noise and read noise, so it improves SNR in dim lighting. To keep these photographs sharp even if your hand shakes and the subject moves, we use short exposures. We also reject pieces of frames for which we can't find a good alignment. This allows HDR+ to produce sharp images even while collecting more light.

But if capturing and merging multiple frames produces cleaner pictures in low light, why not use HDR+ to merge dozens of frames so we can effectively see in the dark? Well, let's begin by defining what we mean by "dark". When photographers talk about the light level of a scene, they often measure it in lux. Technically, lux is the amount of light arriving at a surface per unit area, measured in lumens per meter squared. To give you a feeling for different lux levels, here's a handy table:

Smartphone cameras that take a single picture begin to struggle at 30 lux. Phones that capture and merge several pictures (as HDR+ does) can do well down to 3 lux, but in dimmer scenes don't perform well (more on that below), resorting instead to their LED flash. With Night Sight, our goal was to improve picture-taking in the regime between 3 lux and 0.3 lux, using a smartphone, a single shutter press, and no LED flash. Making this feature work well involves several key elements, the most important of which is capturing more photons. While lengthening the exposure time of each frame increases SNR and leads to cleaner pictures, it unfortunately introduces two problems.
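The two SNR claims above — that SNR grows roughly with the square root of the light collected, and that merging several frames reduces the impact of shot noise and read noise — can be checked with a small simulation. This is a sketch, not the HDR+ pipeline: it models shot noise as Poisson photon arrivals and read noise as Gaussian readout error, and "merging" here is a naive average of already-aligned frames, whereas the real pipeline aligns frames and rejects badly aligned pieces. The function name and parameters are illustrative, not from the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(mean_photons, n_frames=1, read_noise=3.0, trials=100_000):
    """Estimate the SNR of one pixel that receives `mean_photons`
    photons per frame, after averaging `n_frames` merged frames."""
    # Shot noise: photon arrivals follow a Poisson distribution.
    frames = rng.poisson(mean_photons, size=(trials, n_frames)).astype(float)
    # Read noise: Gaussian error added when each frame is read out.
    frames += rng.normal(0.0, read_noise, size=frames.shape)
    merged = frames.mean(axis=1)  # naive merge: average the frames
    return merged.mean() / merged.std()

# SNR grows roughly with the square root of the collected light...
print(snr(100))   # about 100 / sqrt(100 + 3^2), i.e. roughly 9.6
print(snr(400))   # quadrupling the light roughly doubles the SNR

# ...and averaging N frames improves SNR by about sqrt(N), which is
# why burst merging helps so much in dim lighting.
print(snr(100, n_frames=16))  # roughly 4x the single-frame SNR
```

Note that in this model a single long exposure and an average of many short ones collect the same light and reach a similar SNR, but the burst can be re-aligned in software, which is what lets HDR+ stay sharp despite hand shake.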