
Cameras are complex instruments with complicated components that work in concert to create an image. Even though these components (and their specifications) play a critical role in determining the ultimate quality of an image, the only tech spec that gets any lip service in marketing literature is the megapixel count.

But is that enough to make an informed decision when purchasing a smartphone? Are megapixels the final word when it comes to smartphone photography? And what's the best Android phone for photography lovers?


Is megapixel count the only relevant metric for smartphone cameras?

The megapixel count isn't the sole factor you should consider when shopping for a phone. Although megapixel counts are important, other variables affect image quality. The hardware, software, and your preferences play a part in the perception of how good a picture is. To understand why the megapixel count isn't the ultimate arbiter of camera quality, it helps to know how smartphone cameras work, what each part does, and how their performance is quantified.

Light is the most critical thing cameras need to produce a picture. Professional cameras can control how much light they receive by adjusting the aperture (the opening between the internal workings of the camera and the outside world). Smartphones mostly don't have that luxury.

Samsung released some flagship phones a few years ago with variable aperture. However, aside from Huawei's Mate 50, phone manufacturers don't want to use the space or spend the money to put the feature in their phones. Besides, photo processing can achieve most of the optical effects produced with a variable aperture, so there isn't much need until the tech progresses.

Since a variable aperture is mostly out of the question (for now) and because everything about mobile cameras is miniaturized, as much light as possible needs to reach the sensor. Therefore, you want the largest aperture possible. Aperture is measured in f-stops. The smaller the number, the larger the opening.
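If you want to see how that works numerically, here's a minimal Python sketch. The f-number is the focal length divided by the diameter of the opening; the 4mm focal length and the two f-stops below are illustrative values, not the specs of any particular phone.

```python
# f-number N = focal length f / aperture diameter D, so D = f / N.
# A lower f-number therefore means a physically larger opening.

def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Return the diameter of the aperture opening in millimeters."""
    return focal_length_mm / f_number

# Illustrative values only: a hypothetical 4mm phone lens at two f-stops.
for f_stop in (1.8, 2.4):
    d = aperture_diameter_mm(4.0, f_stop)
    print(f"f/{f_stop}: opening is {d:.2f} mm across")

# Light gathered scales with the area of the opening (~D squared),
# so f/1.8 admits roughly (2.4 / 1.8) ** 2, or about 1.8x, the light of f/2.4.
```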

A chart that shows aperture and f-stop details
Source: Wikimedia Commons/Jason Fredin

Another important consideration is the focal length of the camera. To understand focal length, you must understand the basics of a traditional camera. Light passes through the camera lens, where it is focused to a point before being projected onto film or a sensor. In this setup, the focal length is the distance between the camera's sensor/film and where the light is converged. The lower the focal length (measured in mm), the wider the angle of view. The higher the focal length, the narrower the view and the greater the magnification.

The focal length of a smartphone camera is around 4mm, but that number is meaningless from a photographic point of view. Instead, the focal length of smartphones is given as a 35mm equivalent: the focal length you would need on a full-frame camera to achieve the same angle of view. A higher or lower number isn't necessarily better or worse, but most smartphones have at least one camera with a wide angle and a short focal length because most people want to capture as broad a scene as possible with their photos.
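For the curious, the conversion itself is simple: multiply the real focal length by the crop factor, which is the ratio of a full-frame sensor's diagonal to the phone sensor's diagonal. Here's a rough Python sketch that assumes a 1/2.3-inch-class sensor (roughly 6.17mm x 4.55mm) purely for illustration.

```python
import math

FULL_FRAME_DIAGONAL_MM = math.hypot(36.0, 24.0)  # full-frame is 36mm x 24mm, diagonal ~43.3mm

def equivalent_focal_length(real_focal_mm, sensor_w_mm, sensor_h_mm):
    """35mm-equivalent focal length = real focal length * crop factor."""
    crop_factor = FULL_FRAME_DIAGONAL_MM / math.hypot(sensor_w_mm, sensor_h_mm)
    return real_focal_mm * crop_factor

# Illustrative example: a 4mm lens over a 1/2.3-inch-class sensor (~6.17mm x 4.55mm)
print(round(equivalent_focal_length(4.0, 6.17, 4.55), 1))  # ~22.6, i.e. a wide ~23mm equivalent
```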

A comparison of different camera focal lengths

The lens is fundamental to the camera's focal length. The lens comprises multiple lenses (called elements) and the lens protector. The job of the lens is to bend and focus the light onto the image sensor. The problem is that light doesn't bend by a fixed amount: different wavelengths refract differently, so red and blue light come into focus at slightly different points.

Smartphone makers use multiple elements and software to compensate for these aberrations and distortions. No lens transmits light perfectly, so there's always a little loss. The drawback of adding more elements is that less light reaches the sensor. Where does the lost light go? It's reflected, and those reflections can show up in your photos and videos as lens flare.

The physics behind distortion and reflection are complicated, which is likely why phone makers don't tend to publish information on their lenses with their other camera specs. You'll have to use the camera rather than rely on metrics to determine if it meets your standards. One way to test the lens quality of a phone camera is to look for a type of chromatic aberration called purple fringing. This is when the edges of high-contrast objects have a purplish haze. This effect can be easy to miss, so it's not a big deal for most people, but better lenses reduce or eliminate this effect.

Hair on the head of a horse with the chromatic aberration purple fringing

How important is the camera sensor?

The camera sensor is a piece of hardware that converts raw optical data (light) into electrical information. The sensor's surface is covered with millions of individual photosites, each of which creates an electrical signal based on the intensity of light it receives.

The larger the individual photosite (measured in micrometers or μm), the better it captures light. It can reproduce a truer value, especially in low-light or dynamic-light situations. The trade-off to larger photosites is that fewer can fit on the image sensor, so having a larger sensor (measured in fractions of an inch) is important. A bigger sensor is almost always better.

Samsung Isocell HP2 200MP camera sensor
Source: Samsung.com

Photosites only measure the intensity of light, not its color. To extract color data, image sensors cover each photosite with a color filter (typically a matrix of red, green, and blue). The arrangement of this color filter array is known by the image processor, which applies those colors to the luminance values of each photosite, then uses that information to produce a full-color image.

Most phones use a Bayer color filter made up of 50% green, 25% red, and 25% blue filters (RGGB). The preponderance of green is because the human eye is better at seeing green than other colors. Some phones use a filter array that cuts those numbers in half: 25% green, 12.5% red, and 12.5% blue, with the remaining 50% of photosites left unfiltered to read pure luminance values (RGBW). This arrangement picks up extra luminance data at the cost of color data.
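To make that layout concrete, here's a small Python sketch that tiles the repeating 2x2 RGGB pattern across a toy sensor and confirms the 50/25/25 split; the grid size is arbitrary.

```python
import numpy as np

# The Bayer pattern repeats a 2x2 tile: red/green on one row, green/blue on the next.
BAYER_TILE = np.array([["R", "G"],
                       ["G", "B"]])

def bayer_mosaic(rows: int, cols: int) -> np.ndarray:
    """Return which color filter sits over each photosite of a rows x cols sensor."""
    return np.tile(BAYER_TILE, (rows // 2, cols // 2))

mosaic = bayer_mosaic(4, 4)
print(mosaic)

# Counting the filters confirms the split described above: G 0.5, R 0.25, B 0.25.
values, counts = np.unique(mosaic, return_counts=True)
print(dict(zip(values, counts / mosaic.size)))
```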

Some digital cameras experimented with cyan, magenta, and yellow filters (CYYM) instead of red, green, and blue because they let more light through to the photosites than RGB filters. The approach was abandoned because it was difficult for software to reconstruct accurate colors from CMY data.

Huawei made a phone that used the traditional Bayer filter but replaced the green filter with a yellow filter (RYYB) to boost the light collection of its photosites. We liked the phone and its robust night shots, but a yellow tint sometimes showed up in the pictures.

One popular alternative to the Bayer filter is the Quad Bayer. It uses the same pattern as the RGGB filter, except each color covers four photosites instead of one. This arrangement is good for low-light photography and reduces image noise.

Five different CFA arrangements

Computational photography

The final piece of the smartphone photography pipeline is the image processor. This is where the fundamental electrical data produced by the photosensor is transformed into a proper photo. Every OEM has its own pipeline for this process, but many of the steps are the same for all manufacturers.

  • Analog-to-Digital Converter (ADC): Each photosite on a photosensor measures the intensity of light that strikes it as an analog value. The range of values it can register (how many shades of black and white) is known as its bit depth. The result of this step in the pipeline is a black-and-white image.
  • Demosaicing/Debayering: Once a rough black-and-white image has been produced, the image processor determines each pixel's color based on the arrangement of the CFA and the intensity of light registered by the corresponding and adjacent photosites, turning the RAW sensor data into a full-color image (a crude sketch of the idea follows this list).
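Here's the crude sketch promised above. It is not any manufacturer's algorithm, just the simplest possible illustration: each 2x2 RGGB tile of raw values is collapsed into one full-color pixel, whereas real demosaicing interpolates a color value for every photosite.

```python
import numpy as np

def crude_demosaic(raw: np.ndarray) -> np.ndarray:
    """Very crude half-resolution demosaic of an RGGB Bayer mosaic.

    Each 2x2 RGGB tile becomes a single RGB pixel: the red and blue
    sites are used directly, and the two green sites are averaged.
    """
    r = raw[0::2, 0::2]                            # red photosites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average of the two green photosites
    b = raw[1::2, 1::2]                            # blue photosites
    return np.stack([r, g, b], axis=-1)

# A made-up 4x4 mosaic of raw intensities (0.0-1.0) for illustration.
raw = np.random.default_rng(0).random((4, 4))
print(crude_demosaic(raw).shape)  # (2, 2, 3): a tiny full-color image
```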

From here, the steps taken by the image processor become more subjective and can vary significantly from one manufacturer to another. The steps below are in most image-processing pipelines.

  • Color balancing/White balancing: This is the process of adjusting the intensities of the colors in a shot. One of the goals of this process is to make sure neutral colors like white and gray appear correctly, neither too warm nor too cool. Color balancing can also change the mood of a shot, making it appear warmer, but if taken too far, it can color-shift the photo.
  • Noise reduction: All electronic signals are subject to interference (noise), which usually manifests as out-of-place, overly bright pixels. Photographs taken in low light are particularly susceptible to noise, and most image processors do their best to eliminate it. Some noise-reduction methods are better than others, and all of them sacrifice some fidelity for a more pleasing image.
    Noisy image contrasted with a denoised image
  • Gamma correction: Your phone's image sensor does not see the world the same way our eyes do, especially when it comes to luminance values. Our eyes are very good at differentiating darker shades, which means that what the image sensor records as 50% light intensity appears to us as roughly 75% light intensity. Gamma correction adjusts the image's luminance values to align with how we perceive the world (a tiny numeric sketch follows this list).
    How gamma correction adjusts luminance values to align more closely with human perception
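As a quick illustration of that last step, the sketch below applies the common gamma-2.2 approximation; real pipelines use more elaborate transfer curves (such as the piecewise sRGB one), but the effect on mid-tones is the same.

```python
# Minimal gamma-encoding sketch using the common gamma = 2.2 approximation.

def gamma_encode(linear: float, gamma: float = 2.2) -> float:
    """Map a linear sensor intensity (0.0-1.0) to a perceptually scaled value."""
    return linear ** (1.0 / gamma)

# 50% linear intensity ends up stored as roughly 73%, close to the figure above.
print(round(gamma_encode(0.5), 2))
```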

One of the greatest advantages phone cameras have over their professional DSLR counterparts is the electronic shutter. Whereas a DSLR relies on a slower mechanical shutter to control how much light its sensor is exposed to, phone sensors use electronic shutters, which allow dramatically shorter exposure times. This means our phones can take a burst of photos in a very short time, allowing for novel post-processing techniques.

  • Multi-exposure HDR: Exposure refers to how long the image sensor is exposed to the light of the scene. Shorter exposures better capture the brightest parts of an image while leaving the darker parts too dark to see detail. Longer exposures bring out darker details but blow out brighter details, drowning them in white. By taking multiple photos at different exposures and combining them, your phone can produce a photo with a greater range of light intensities.
  • Multi-frame noise reduction: One of the most common sources of noise in photos is shooting in low light. To compensate for the lack of photons, your phone raises its ISO, making the sensor more sensitive to light. The trade-off for capturing more light is a higher propensity for noise. The idea is similar to image stacking in astrophotography: multiple photos are taken at that high ISO setting, then aligned and combined to average out random noise while preserving detail (a simple averaging sketch follows this list).
    Samsung multi-frame noise reduction pipeline
    Source: Samsung.com
  • Portrait mode: Most phone cameras have a deep, fixed depth of field, meaning that close objects are just as in focus as distant ones, but from an aesthetic perspective, that's not always what you want. Often, it's desirable to have a shallow depth of field where only the photo's subject is in focus and the background is blurry. To achieve this, some phones capture more than one image (sometimes from different cameras) to estimate how far away different parts of the scene are, then separate the background from the foreground and selectively apply a blur.
    How Google Pixels process depth for portrait mode
    Source: googleblog.com
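Here's the simple averaging sketch mentioned above. It assumes the burst frames are already perfectly aligned (real multi-frame pipelines also have to register the frames and reject moving subjects), and the scene and noise levels are made up.

```python
import numpy as np

# Multi-frame noise reduction by straight averaging of aligned frames.
rng = np.random.default_rng(1)
true_scene = np.full((100, 100), 0.4)   # the "real" brightness of a flat test scene
frames = [true_scene + rng.normal(0, 0.1, true_scene.shape) for _ in range(8)]

single = frames[0]
stacked = np.mean(frames, axis=0)       # combine the burst into one frame

print(f"noise in one frame:     {np.std(single - true_scene):.3f}")
print(f"noise in 8-frame stack: {np.std(stacked - true_scene):.3f}")
# Averaging N frames cuts random noise by roughly sqrt(N), about 2.8x here.
```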

Every OEM has its own process for applying these steps, and given the same RAW data, Samsung, Huawei, Google, and Apple phones produce different images. No approach is objectively better than another; some people prefer the Pixel's HDR-heavy processing, while others like the iPhone's more conservative, natural look.

So, do megapixels really matter?

Absolutely. We expect to capture a degree of authenticity when we take a photo. We generally want our photos to be as close to real as possible, and visible pixelation shatters the illusion. To preserve that illusion of reality, we need to approximate the resolution of the human eye, which, for someone with 20/20 vision, is about 720 pixels per inch as seen from one foot away.

If you want to print your photos in the standard 6-inch by 4-inch photo format, you need a resolution of 4,320 by 2,880 or 12,441,600 pixels, which is a bit under 12.5 megapixels.
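You can check that arithmetic yourself; the 720 PPI figure is the eye-resolution estimate used above.

```python
# Pixels needed for a print = print dimensions (inches) x pixels per inch.
PPI = 720                      # the eye-resolution figure assumed above
width_in, height_in = 6, 4     # standard photo print size

width_px = width_in * PPI      # 4,320
height_px = height_in * PPI    # 2,880
total = width_px * height_px   # 12,441,600

print(f"{width_px} x {height_px} = {total:,} pixels, or about {total / 1e6:.1f} MP")
```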

A hand holding a photo of a body builder

But that raises the question: If 12 megapixels are near the limit of what the average human can see, why does the Samsung Galaxy S21 Ultra have 108MP? Phones like the S21 Ultra don't produce full 108MP images by default. Instead, they use pixel binning, where the phone's software combines the data from a square of adjacent photosites into a single "super pixel."

This kind of software magic is what phones with Quad Bayer sensors use to improve performance. Instead of using one photosite to collect each color sample, the sensor combines a square of four photosites, quadrupling the light-gathering area of each effective pixel at the cost of final image resolution. Why not just make larger photosites? You could, but binning smaller ones offers flexibility that permanently larger photosites can't match, such as better HDR images and zoom capabilities (see the sketch below).
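As a toy example of what binning does to the numbers, the sketch below averages each 2x2 block of raw values into one super pixel; actual Quad Bayer binning works on groups of same-colored photosites on the sensor itself, so this is only a simplified stand-in.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of photosite values into one 'super pixel'.

    A simplified stand-in for Quad Bayer binning: four neighboring values
    are averaged, trading resolution for light-gathering.
    """
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A made-up grid of raw intensities, scaled way down for illustration.
raw = np.random.default_rng(2).random((8, 8))
binned = bin_2x2(raw)
print(raw.shape, "->", binned.shape)  # (8, 8) -> (4, 4): a quarter of the pixel count
```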

Don't forget to save your photos

Now that you're versed in smartphone camera components, it's time to think about storage. If you're a smartphone shutterbug, don't rely on a single method to save your photos. Google Photos can help you organize and save photos, but you'll want backups in case you get locked out of your account. A good Synology NAS is another solid photo storage option, but neither it nor your phone is ideal for archiving important memories on its own, as digital storage methods often become unstable after 5 to 10 years.

The Library of Congress provides a good overview of how to archive your print and digital media. If you have special needs, most research universities have staff who are experts in digital media archiving and can point you in the right direction.