
The Pixel phones have been out for several weeks now, and a lot has been said about the camera. In this deeper dive into the Google Camera 4.2 on the Pixel XL, I will try to comment on some of the things that I feel have been a bit overlooked. Most of the points, if not all, should also apply to the smaller Pixel.

A note on pixel size, sensor size, and aperture

Starting with last year’s Nexus phones, Google has been advertising “bigger pixels” in their camera sensors, frequently pointing out their 1.55µm size in marketing materials. Relatively speaking, they’re larger than the ones on most other phones and should lead to lower noise, at least according to common belief. A bigger sensor was also marketed as bringing with it better image quality, but not as much emphasis was put on this as the size of the individual pixels. So which is it: bigger pixel size, bigger sensor size, maybe both?

With smart features like HDR+ there is some muddying of the waters here, because HDR+ yields such good results, especially under low light, that many people were quick to attribute this better quality to the bigger pixels. But we shouldn’t forget that the Nexus 5 and Nexus 6 also had very good low-light performance when using HDR+, relative to the competition.

Photography is, at its most fundamental, capturing light. Generally speaking, the more of it you can collect, the more useful signal you have, which means cleaner, less noisy photos. When it is said that a bigger sensor can gather more light, it’s not the sensor alone doing the work. In order to keep the same f-number, shutter speed, and ISO, the bigger sensor requires a bigger entrance pupil for the same field of view. (You can think of the entrance pupil as how big the physical aperture appears to the incoming light.) That larger entrance pupil is what actually lets more light in. The extra light is therefore a function of both the bigger sensor and the larger entrance pupil needed to match it.

If you have a larger sensor but also a higher f-number (e.g. f/2 vs. f/1.7), you will need to raise the sensitivity (ISO) in order to match the exposure. Bumping up the ISO also means more noise, which basically cancels out the advantage of the bigger sensor. So, for a given field of view, a bigger entrance pupil helps lower noise; a bigger sensor by itself doesn’t necessarily. It’s just that, for the same field of view and f-number, a larger sensor should come with a bigger entrance pupil.

As luck would have it, all this is directly relevant to the Pixel phones’ current competition, because the Galaxy S7 and S7 Edge have a smaller sensor but a lower f-number. The Samsung phones have a 4.20mm f/1.7 lens and a similar field of view to the Pixels, which use a 4.67mm f/2.0 lens. What this ultimately translates to is that the S7 system should collect a bit more light overall than the Pixels’ (2.47mm vs. 2.34mm entrance pupils, respectively). In other words, claims that the bigger sensor of the Pixel phones should yield lower noise are negated by the bigger entrance pupil of the S7 phones. (To be clear, this is not simply because of the lower f-number of the S7; if its sensor were even smaller and the lens still f/1.7, the Pixel could gather more light even at f/2.0.)
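
If you want to check those entrance pupil figures yourself, here is a minimal sketch of the arithmetic in Python (my own illustration, using only the focal lengths and f-numbers quoted above):

```python
import math

# Entrance pupil diameter = focal length / f-number. For the same field of
# view and shutter speed, the light gathered scales with the pupil's area.
def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

pixel_xl = entrance_pupil_mm(4.67, 2.0)   # ~2.34 mm
s7_edge  = entrance_pupil_mm(4.20, 1.7)   # ~2.47 mm

area_ratio = (s7_edge / pixel_xl) ** 2    # relative light gathered
stops = math.log2(area_ratio)             # same difference expressed in stops

print(f"Pixel XL entrance pupil: {pixel_xl:.2f} mm")
print(f"S7 Edge entrance pupil:  {s7_edge:.2f} mm")
print(f"S7 Edge gathers about {area_ratio:.2f}x the light ({stops:.2f} stop)")
```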

I believe this shows in the samples below, taken in “raw” with Manual Camera and processed without noise reduction or sharpening. The S7 shows more noise at the same ISO, but since it has an f/1.7 lens, you can use about a half-stop lower ISO and keep the same shutter speed. (These exposure settings are the closest to a half-stop that Manual Camera allows, and they should put the S7E at a slight disadvantage.) At these equivalent ISO and f-stop settings, the noise difference is not clearly discernible. Would you give up OIS in order to have a bigger sensor if you knew that the real-world difference was this inconsequential? (More on that in the OIS section.)

Left: Pixel XL (1/125s - ISO 1600), Center: S7 Edge (1/190s - ISO 1600)

Right: S7 Edge (1/125s - ISO 1200)

After all this, you might notice that pixel size doesn’t enter the equation, and that’s because in practice it really doesn’t in any significant way. Ever since they became popular (circa 2005), cameras with bigger sensors, such as the Canon 5D series, have been known for better image quality thanks to those sensors (and the larger-aperture lenses the system implies), but I have never seen tests that definitively pin better quality on bigger pixels. On the contrary, pixel size has been steadily decreasing on these big-sensor cameras (as megapixel counts increase), with overall picture quality not decreasing. My 12-megapixel 5D has nothing on the 22-megapixel 5D Mark II. And while I haven’t tested them personally, I think it’s safe to say the 5Ds with its 50 million smaller pixels produces no less stunning quality than the 30-megapixel 5D Mark IV. On the mobile side there are also examples, like the HTC “Ultrapixel” cameras, which leaned on the “big pixels” marketing almost to a fault and ultimately failed to impress.

A final observation about pixel size. I think this belief largely stems from people viewing images at 100% (1:1) on their monitors. This is, after all, how most photography review sites such as DPReview have done noise tests since time immemorial. But that’s misleading, because while it’s true that an image from a higher-pixel-density sensor will generally show more noise at 100%, the physical size of the image will be larger, and objects in the frame will also have more pixels defining them. As it happens, these two effects basically cancel each other out when you view the images at the same physical size. To be clear, I’m not saying that pixel size cannot possibly have an influence. What I’m saying is that the observed differences are perfectly explained by other means, and that (to my knowledge) there has never been conclusive evidence that pixel size influences noise in modern digital cameras.
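
If you want to convince yourself of this, here is a rough toy simulation of my own (not anything the camera does): two sensors of the same physical size capture the same total light with pure shot noise, and only the pixel count differs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sensors of the same physical size capturing the same total light;
# only the pixel count differs. Shot noise is Poisson, so per-pixel
# SNR = sqrt(mean photons per pixel).
big_pixels   = rng.poisson(100, size=(1000, 1000))   # fewer, bigger pixels
small_pixels = rng.poisson(25,  size=(2000, 2000))   # 4x the pixels, 1/4 the photons each

def snr(img):
    return img.mean() / img.std()

# Viewed at 100%, the denser sensor looks noisier per pixel...
print("SNR at 100%:", round(snr(big_pixels), 1), "vs", round(snr(small_pixels), 1))

# ...but binned down to the same output size (2x2 here), the gap disappears.
binned = small_pixels.reshape(1000, 2, 1000, 2).sum(axis=(1, 3))
print("SNR at same output size:", round(snr(big_pixels), 1), "vs", round(snr(binned), 1))
```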

What does HDR+ do?

HDR+ is a clever way of dealing with the issues that inherently come with having a small sensor and a tiny lens in a compact device. Astrophotographers have used this technique—called image stacking—since before “proper” digital photography tools were in widespread use, working from digital video with specialized software. In my experience, stacking was always a bit daunting and too out-there for a relative newcomer to photography to try out. Even for more advanced users, its practicality for non-astro work was very limited, since it required careful setup, special software, and time-consuming processing. With fast imaging sensors and processors now available, it finally hit the mainstream in the past few years.

Image stacking, as its name implies, takes many frames and “stacks” them, blending them all into one less noisy picture. The reason this works is that this type of noise is random, so it doesn’t add up the same way the “signal” does. As Marc Levoy, who leads the computational photography team at Google, put it:

Mathematically speaking, take a picture of a shadowed area — it's got the right color, it's just very noisy because not many photons landed in those pixels […] But the way the mathematics works, if I take nine shots, the noise will go down by a factor of three — by the square root of the number of shots that I take.

Traditional HDR techniques require at least two (but usually more) differently exposed pictures, and then use the less noisy dark parts of the brighter pic and the non-saturated bright parts of the darker one. This often means you need at least one longer exposure, which can be an issue in low light, and that’s why it’s typically not used in those cases. HDR+ underexposes the multiple pictures it takes (thus allowing for faster shutter speeds), and then adds them up. Underexposure certainly makes for more noise, but it keeps highlights from getting clipped. By combining the pictures, noise goes down, but highlights stay unclipped. We often think of HDR as useful only under sunny conditions, but we shouldn’t forget that just the act of lowering noise increases dynamic range as well, so it also applies to low-light photography.
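
As a toy illustration of the stacking math above (my own simulation, not Google’s actual pipeline): averaging several underexposed frames cuts the random noise by roughly the square root of the frame count, while the underexposure keeps a bright highlight from clipping the way a single longer exposure would.

```python
import numpy as np

rng = np.random.default_rng(1)
FULL_WELL = 255                                # toy sensor's saturation ("clipping") level
true_scene = np.array([30.0, 120.0, 300.0])    # shadow, midtone, bright highlight

def capture(exposure):
    """One noisy frame: scaled scene plus read noise, clipped at saturation."""
    frame = true_scene * exposure + rng.normal(0, 5, size=true_scene.shape)
    return np.clip(frame, 0, FULL_WELL)

long_exposure = capture(1.0)          # the 300 highlight clips at 255: lost for good
one_short     = capture(1/3) * 3      # highlight survives, but noisy once brightened
stacked_short = np.mean([capture(1/3) for _ in range(9)], axis=0) * 3
# Averaging 9 frames cuts the random noise by ~sqrt(9) = 3x, while the
# underexposure keeps the highlight below saturation.

print("long exposure  :", long_exposure.round(1))
print("1 short frame  :", one_short.round(1))
print("9 short frames :", stacked_short.round(1))
```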

Another good thing to come out of this is that HDR+ actually has an advantage over OIS. Both rely on a longer total capture time, but since HDR+ spreads several short exposures over that interval, blur due to subject movement (against which OIS is ineffective) is minimized.

The front camera is often overlooked, but since it has a smaller sensor and an even tinier lens than the rear camera, HDR+ helps quite a lot there; thankfully, Google added the ability to use it on the front camera starting with the 2015 Nexus phones. Your party selfies have never been clearer.

HDR+ in the Pixel camera

On the Pixel phones, HDR+ works much as it did on the Nexus phones, but there are now differences in both behavior and functionality. First of all, HDR+ auto is still meant to be the default mode, but the camera now reverts to it after you close the app. (I’m still not 100% sure this isn’t a bug, but at least three camera updates have come out since release and it hasn’t been “fixed.”) I personally liked to keep HDR+ on as the default, which brings me to the next point.

Functionally, there actually is a difference between the automatic and manual HDR+ modes, and this is new with the 4.2 camera on the Pixel phones. The HDR+ auto mode is the one Levoy talked about: it starts snapping pics as soon as you open the camera, and when you press the shutter button it instantly keeps the latest ones. HDR+ on works like before: you press the button, a “progress circle” appears to indicate the pics are being taken, and then the camera processes them. Notably, in the Verge interview they mention in passing that manual HDR+ is “slightly higher quality,” but don’t say how.

From what I gather, the HDR+ on mode can indeed yield better results, sometimes very noticeably. Not so much lower noise as a bit more detail, and it doesn’t clip highlights as readily. This is influenced greatly by the lighting in the scene. In low-light photography, HDR+ auto seems to clip highlights much as it would with HDR+ off, with the main quality difference being less noise and more detail in the rest of the image. In these situations, I definitely recommend setting HDR+ on.

Left: HDR+ auto (1/15s - ISO 4519), Center: HDR+ on (1/10s - ISO 854)

Right: HDR+ off (1/15s - ISO 4519)

In the images above, the scene was very dark. Notice that HDR+ auto and HDR+ off used the same shutter speed and ISO. This would account for the similar highlight clipping in both pictures, since, unlike noise, blown highlights cannot be recovered by stacking: saturated areas are already lost information.

Left: HDR+ auto (1/30s - ISO 955), Right: HDR+ on (1/40s - ISO 462)

I would have happily taken all pictures inside the museum in "HDR+ on" mode but the camera kept reverting to "HDR+ auto."

One subtle difference between HDR+ auto and HDR+ on is in the darker parts, where the former may show just a bit less noise and the latter can look grainy. HDR+ auto also seems to have more contrasty (darker) shadows, which helps hide noise. Preservation of fine detail, including color, is a tad better in HDR+ auto. But this is proper “pixel peeping”; you’re most likely better off with HDR+ on and its avoidance of clipped highlights. Under typical shooting and viewing circumstances the noise difference is negligible.

Left: HDR+ auto (1/30s - ISO 596), Right: HDR+ on (1/60s - ISO 271)

Left: HDR+ auto, Right: HDR+ on

Note the difference in color detail in “California.”

This is a bit of speculation, but I think that in low-light scenarios HDR+ auto prioritizes the lowest possible noise level, so it doesn’t underexpose. This comes at the cost of clipping highlights, which may not seem as important in low-light photography. Personally, I prefer non-clipped highlights and will gladly take the small increase in grain-like noise.

The Exif information is consistent with this: if there is a bright part in an otherwise dark scene that might get clipped, the shutter speeds of the HDR+ on images are consistently faster (by as much as 1 stop, or 2x) and/or their ISO lower than in the HDR+ auto ones.
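
If you want to run the same comparison on the Exif of your own shots, the stop arithmetic is just a base-2 log of the ratios; the sample values below are the ones from the museum captions above:

```python
import math

def stops(a, b):
    """Exposure difference between two values, in stops (one stop = 2x)."""
    return math.log2(a / b)

# Museum sample above: HDR+ auto at 1/30s, ISO 955 vs HDR+ on at 1/40s, ISO 462.
shutter_diff = stops(1/30, 1/40)   # ~0.4 stop faster shutter for HDR+ on
iso_diff     = stops(955, 462)     # ~1.0 stop lower ISO for HDR+ on
print(f"shutter: {shutter_diff:.2f} stops, ISO: {iso_diff:.2f} stops, "
      f"total: {shutter_diff + iso_diff:.2f} stops less exposure with HDR+ on")
```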

In bright situations, where noise is not bound to be a problem, HDR+ auto may yield results very similar to HDR+ on. Interestingly, HDR+ on sometimes still underexposes more, either by lowering the ISO, using a faster shutter speed, or both. I noticed this happened more when using the “spot” focus/metering mode (tap on the part of the scene you want exposed properly). You can see it in the samples below, in which the metered spot was on the lower part of the bench:

Left: HDR+ auto (1/900s - ISO 50), Center: HDR+ on (1/1500s - ISO 50), Right: HDR+ off (1/230s - ISO 50)

One instance in which HDR+ auto and HDR+ on behave the same is when the flash is enabled. The camera even shows the progress circle in HDR+ auto while it’s taking the pics, so it isn’t pre-capturing them (obviously it can’t, unless the LED were on the whole time). Previously, on version 3.x of the camera, you could not enable HDR+ and flash at the same time, but this was thankfully changed in the 4.1 update.

The LED is too weak to work as “fill flash” on a sunny day (a technique that uses the flash to “fill in” very harsh shadows, for example on faces), but in darker conditions it may work. For instance, you may have noticed that when you take a picture indoors near a window, with a brighter scene outside, either the outdoor scene gets “blown out” or the indoor one comes out too dark. As long as the difference is not too great, the flash combined with HDR+ will capture a more balanced image, with the LED trying to illuminate the main subject. This is also influenced by how close the camera is to the subject, since direct flash light falls off very rapidly due to the inverse square law, and the LED is very low-power compared to a real flash. It’s true that HDR in general is made for these situations, but flash may give your subject a bit more “pop.”
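
To put rough numbers on that inverse square falloff (the distances are arbitrary examples, not measurements):

```python
# Direct flash illumination falls off with the square of the distance, so a
# subject twice as far from the LED receives only a quarter of the light.
reference_m = 1.0   # arbitrary reference distance in metres

for distance_m in (0.5, 1.0, 2.0, 4.0):
    relative = (reference_m / distance_m) ** 2
    print(f"{distance_m:>3} m -> {relative:.2f}x the light of the 1 m reference")
```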

HDR+ also lends itself to a technique related to fill flash called “dragging the shutter,” in which the shutter speed controls the ambient exposure and the flash lights the main subject. This usually means a slower shutter speed in order to capture the ambient light of a dark scene. Since HDR+ can stand in for a slow shutter speed, it’s well suited to this task. In dark conditions, if you disable HDR+ and the camera is very close to the subject, you may get a harshly lit picture with a disproportionately dark background (because of the very rapid light falloff). Strangely, with HDR+ off the LED actually flashes brighter when taking the picture than with HDR+ enabled, which exacerbates the effect. You might find that HDR+ with flash produces better selfies, for example, depending on lighting and background.

Left: HDR+ off, flash on; Mid-left: HDR+ on, flash on

Mid-right: HDR+ on, flash on, WB-corrected; Right: HDR+ on, flash off

One word of warning when using HDR+ for this, though. White balance seems to be a bit off in some situations, like when the background is predominantly warm (tending to yellow/orange), so your main subject might end up with a blue cast. A “flash” WB preset would have been useful here, where the subject takes priority. As far as I can tell the Nexus 6P doesn’t suffer from this. In any case, it’s usually a quick color temperature fix after the fact, which pretty much any editor (now even Google Photos!) can do.

Is the lack of optical image stabilization a big deal?

I’ve noticed something important that usually goes unsaid in discussions about the lack of OIS in the latest Google phones: OIS, being hardware, works in third-party camera apps. People use these apps for a lot of things: a quick social media post, check and receipt scanning, or even video streaming. Some apps like Camscanner have their own camera interface but also give you the option to use the “system camera,” which is the Google camera sans some of the options like panoramas, photospheres, and HDR+. It would be great if Google allowed at least the “system camera” to use HDR+ when OIS is not available. I’m not sure how feasible it is to expose the EIS and HDR+ functionality through the Camera2 API, but that seems like the ideal solution when there’s no OIS.

Even in the Google camera itself, OIS could help with photospheres and panoramas when lighting doesn’t allow fast shutter speeds. It would be awesome if the camera allowed HDR+ in photospheres and panoramas, though.

I mentioned above that in some cases HDR+ may have an advantage over OIS with a moving subject, but that’s not the whole story. (It is also a false dichotomy: we can have both.) OIS cannot be totally replaced by electronic IS for video, because EIS works by aligning consecutive frames so that the video doesn’t look shaky on playback.

Shaky video is not the only thing OIS fixes, though. It also minimizes blur from camera shake within each individual frame when a slower shutter speed is used. The slowest shutter speed you can use in video is 1/30 second (for 30 fps footage), so this format is the most susceptible to that artifact. At 60 fps you’re less likely to encounter it, because the shutter speed has to stay at 1/60 second or faster, which is quick enough not to blur frames if you’re moderately careful. The downside is that keeping a shorter shutter speed means raising the sensitivity, and with it the noise. So you may see stronger noise reduction at 60 fps in dark conditions (strangely, in 4K video noise reduction seems to be very weak, resulting in much noisier footage). If you are shooting in low light, try not to zoom in, because that amplifies these issues.
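
A rough sketch of that exposure trade-off (the ISO figure is an arbitrary example, not a measurement):

```python
import math

# A 60 fps stream can't expose any frame for longer than 1/60 s, so matching
# the brightness of a 1/30 s frame means doubling the sensitivity (ISO).
shutter_30fps, iso_30fps = 1 / 30, 800    # illustrative low-light 30 fps settings
shutter_60fps = 1 / 60                    # longest possible exposure at 60 fps

iso_60fps = iso_30fps * (shutter_30fps / shutter_60fps)   # -> 1600
extra_gain_stops = math.log2(iso_60fps / iso_30fps)       # -> 1 stop more gain

print(f"60 fps needs ISO {iso_60fps:.0f}, i.e. {extra_gain_stops:.0f} stop more gain "
      f"(and more noise) than 30 fps in the same light")
```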

Bottom line, I think OIS would still have helped in various scenarios where neither HDR+ nor EIS can, and personally I would prefer it even if OIS meant no fancy EIS (I’m not fond of the “panning by joystick” feel that EIS produces). But who knows what kind of compromises Google had to make while designing the phone. What I can say is that having a bigger sensor (and a bigger lens, a smaller entrance pupil, and no space for OIS without a “camera bump”) may not be all it’s cracked up to be, especially if you can do like Samsung and put OIS on a lens with a larger entrance pupil to match that smaller sensor.

Exposure compensation

The Google camera, and even the firmware of the Nexus 6, Nexus 5X, and Nexus 6P, have had a sketchy history with this function, which in my opinion is a very basic and fairly important one. It was previously misnamed “manual exposure,” and you had to enable it in the advanced settings. The firmware issues (which prevented even third-party cameras from offering exposure compensation) have since been fixed, with Marshmallow for the N6 and Nougat for the newer Nexus phones, so third-party cameras can use EC. But with the current stock 4.1 camera, these three phones still haven’t gotten the function back. This should be addressed in the 4.2 camera, as people on the 7.1.1 preview have reported. As an aside, my Nexus 5 on Android 6.0.1 has “manual exposure,” but for some reason it is stuck on version 3.2 of the camera.

On the Pixels, EC works just fine, and even better than in the previous iteration. It is now a continuous slider, whereas Camera 3.x had five discrete 1-stop steps (each doubling or halving the brightness), from “-2” to “+2.” One thing to note, though: the slider does not appear until you tap on the screen, which also enables “spot” metering/focus.
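
For reference, the stop arithmetic behind those steps, which the continuous slider simply interpolates (a minimal sketch, not anything from the app’s code):

```python
# Each EC step is one stop, i.e. the target brightness is scaled by 2**EC;
# the new slider makes the same scale continuous instead of discrete.
for ec in (-2, -1, -0.5, 0, 0.5, 1, 2):
    print(f"EC {ec:+.1f} -> {2 ** ec:.2f}x target brightness")
```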

Focus and exposure lock

As before, tapping on the screen will meter (calculate exposure) and focus on that point. What is different now is that focus and exposure will actually stay locked until you reframe the picture. Previously, the lock would just last a few seconds, which much of the time wasn’t nearly enough.

If you lock exposure while using HDR+, it might underexpose the subject, because it tries to avoid clipped highlights. If the scene has bright highlights and you’re locking onto a subject in the dark, you might get better results with HDR+ auto, which, as we’ve seen, is not too aggressive about avoiding clipped highlights, or you can apply exposure compensation at your discretion.

Focusing speed is considerably faster than on the Nexus 6P. Phase-detection autofocus seems to be doing its job. In dark conditions, focusing is also good thanks to the laser assist, though of course it can’t be as fast as in bright conditions.

Still missing is the now ubiquitous face detection focus, which I would personally find very useful. Fun fact: the Nexus 4 camera had it (along with other “new” features like exposure compensation and white balance presets).

Also missing is flash focus assist. On versions up to 4.1, in completely dark situations the LED would light up to help lock focus before taking the picture. Hopefully this is just an oversight, but it probably won’t affect many people anyway.

White balance presets

It would probably be too much to ask for manual white balance settings in this app, especially coming from Google, but at least having presets can help in some situations. In my experience, auto white balance rarely misses, but when it does it’s good to have a few other options. It’s true that you can fix colors later, but correcting an already-processed picture instead of applying the right color balance at capture time can be limiting: you don’t have the same latitude for color changes, plus you will likely need to re-compress the JPEG file, so it may lose a bit of quality.

Left: Auto WB, Right: “Tungsten” WB preset 

Difficult lighting and contemporary art can mess up your white balance.

I know a lot of people want a raw output option in the Google camera, which would bypass in-camera white balance settings, but in my opinion the quality advantages of HDR+ (which as far as I know can’t produce a raw image) outweigh the slight convenience of lossless color correction in post-processing. If one has to give up HDR+ in order to “shoot raw,” I don’t see most users wanting to take the overall quality hit just to have meticulous control over color.

The "halo" issue

I'm not even sure this is a big issue, to be honest. Halos are a type of lens flare, and even the most expensive lenses exhibit some of it under the "right" conditions. Camera lenses are typically composed of several glass "elements," which cause internal reflections and light scattering, and that manifests as flare. I have attached samples from the Pixel XL and iPhone SE below that show the artifact. You can see different types of flare here, like halos and rays. These may show up when there is a very bright light source—like the sun—inside or just outside the frame. The bottom line is that these are scenarios in which it’s very difficult for any lens not to exhibit internal reflections, especially with non-professional equipment. Frankly, I’d rather leave it alone than gamble on whatever software fix Google might come up with, which could introduce its own share of artifacts.

Left: Pixel XL, Right: iPhone SE

Speed

One of the most celebrated improvements in this year’s Google phones is speed. And the Pixel XL is fast, especially compared with the Nexus 6P, which throttles very quickly, apart from just being slower in general. HDR+ processing is much faster because it’s done on the Snapdragon 821’s dedicated HVX core instead of the CPU, which is a jack-of-all-trades processor. The result is more efficient and faster HDR+ processing.

On the Nexus 6P with Google Camera 4.1, you can only take three HDR+ shots in succession before the shutter button is disabled. After the third shot, you have to wait until the camera finishes processing the first one, and then you can take three more. This limitation is compounded by the longer HDR+ processing time, and it’s made even worse by throttling. The issue has hit me more than once in real-world shooting; in the LA summer, taking more than just a few snaps outside was a terrible experience. If I remember correctly, this limitation was introduced in one of the later camera updates. Before that, I could take more HDR+ shots, but a few times the app crashed and I lost the pictures.

With the Pixels, this is fixed. The camera in the XL can handle up to seven HDR+ shots being processed at a time, and you’ll be hard-pressed to hit that limit in real-world scenarios. If you take a second between shots, the first shot will have been processed before you reach it. In practice, you’ll probably never see the shutter button disabled because of HDR+ shots in the processing queue.

If you’re using HDR+ auto, this is where the “auto” part kicks in: instead of graying out the shutter button when the processing queue is full, HDR+ is disabled for the picture being taken and then re-enabled as the queue clears up. This means you can pretty much keep taking pics indefinitely. Once you’re done, not all of them will be HDR+ shots, but the first six will be for sure. One advantage in this case is that you can take consecutive pictures at a much faster rate than with HDR+ on, so if you’re not worried about clipping highlights and just want to keep noise down, you may find auto more useful.

Another thing that disables the shutter button for HDR+ pictures is the processing of panoramas and photospheres. This could be an issue when you’re out shooting, especially if, like me, you take all your pics in HDR+ on mode. But again, processing is faster, so you wait less.

Just to get a sense of how fast the Pixel XL is, I compared it with the Nexus 6 and Nexus 6P by taking five full photospheres in succession and timing the processing of each. One test was inside a small office at a cool ambient temperature (~71°F), the other in my apartment on a hot day (~87°F inside). The “panorama resolution” setting was set to “high,” and all three cameras produced photospheres of 8704 x 4352 pixels.

Device   | 1st    | 2nd    | 3rd    | 4th    | 5th
---------|--------|--------|--------|--------|-------
Pixel XL | 28s    | 29s    | 25s    | 28s    | 28s
Nexus 6P | 35s    | 46s    | 54s    | 56s    | 58s
Nexus 6  | 1m 05s | 1m 05s | 1m 04s | 1m 37s | 1m 41s

Cool office (~71°F)

Device   | 1st    | 2nd    | 3rd    | 4th    | 5th
---------|--------|--------|--------|--------|-------
Pixel XL | 29s    | 30s    | 32s    | 37s    | 33s
Nexus 6P | 1m 28s | 2m 26s | 2m 50s | 2m 32s | 2m 29s
Nexus 6  | 1m 00s | 1m 06s | 2m 15s | 2m 12s | 2m 17s

Hot living room (~87°F)

The Pixel doesn’t even break a sweat, and the results of the Nexus phones are telling. They are fairly quick to throttle, especially the 6P. Also note that in warm ambient temperature, the Nexus 6 processed a photosphere faster than the Nexus 6P at every step.

The Lens Blur mode is also blazing fast in rendering the image. It may be that the Pixel is using the dedicated core for panorama and blur processing as well, but as far as I know neither Google nor Qualcomm has said anything to that effect.

Final Words

In my opinion, this is the best overall camera (combined with the hardware capabilities of the Pixels) that Google has put out. The app may not have the most features (remember the Nexus 4 camera?), but it’s slowly gaining them back. In raw number of options it’s also not very competitive with other OEMs’ and third-party cameras, but very useful features like HDR+ and cool extras like photospheres make up for it, especially now that basic options like white balance presets and exposure compensation have returned. Your mileage may vary, but for most people, shooting great pics under difficult lighting is a better deal than the ability to shoot raw, set a color filter, or have fully manual controls.

As for quirks, the reversion to HDR+ auto mode every time the camera opens is a bit annoying, and I’m not completely sure whether it’s a bug or what Google intends. The missing LED focus assist may be an oversight; hopefully they bring it back in the near future, though it might not affect a lot of people. Another nice addition would be audio capture from external microphones. The iOS camera has been able to do this for a long time, and it can help a great deal with audio quality in your videos.

I still would have liked OIS and features like face-detection focus (for instance, for blind selfies or difficult-angle shots with the higher-quality rear shooter), but it’s nonetheless a great camera app that integrates well with the hardware, especially in the fast processing of the fancier features. The speed gains over the N6P alone would have been worth it for the initial release, though eventually it would be nice to regain at least the features the Nexus 4 and Nexus 5 (time lapse) cameras had when they came out. We are also still waiting for some pretty great features that were hinted at; hopefully they haven’t been canceled, but that was a long time ago.