Mobile photography is incredibly simple compared to traditional photography. Phone cameras make for a self-contained, point-and-shoot experience, freeing us from needing to understand the myriad functions, settings, buttons, and dials in dedicated camera hardware. But understanding a phone's camera specs isn't always straightforward (what does μm mean, anyway?).

If you've ever scratched your head at the way phone makers describe their devices' cameras, you've come to the right place. We break down what some common terms used to describe mobile camera hardware mean, both literally and practically.

Resolution (MP)

The Samsung Galaxy S23 Ultra's rear cameras.

This one's simple. A camera's resolution equals the number of pixels in the photos it takes. Resolution is noted in megapixels (MP). One megapixel equals one million pixels. So, images from a 12-megapixel camera contain 12 million individual pixels.

Strictly speaking, a camera's resolution is a measure of how many physical pixels (discrete units that collect light, also known as photosites) are present on the camera's sensor. But when shooting at full resolution, a camera takes photos that contain the same number of pixels as its image sensor has, so think of resolution as shorthand for photo size.
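
If you're curious how a megapixel count maps to actual photo dimensions, the arithmetic is simple enough to sketch in a few lines of Python. The 4:3 aspect ratio is our assumption here, though it's typical of phone primary sensors:

```python
import math

def dimensions_from_megapixels(megapixels, aspect_w=4, aspect_h=3):
    """Estimate photo width and height from a megapixel count.

    Assumes a 4:3 aspect ratio by default; real cameras round to
    whole pixel counts, so treat the result as approximate.
    """
    total_pixels = megapixels * 1_000_000
    # Solve width * height = total and width / height = aspect ratio.
    height = math.sqrt(total_pixels * aspect_h / aspect_w)
    width = height * aspect_w / aspect_h
    return round(width), round(height)

# A 12 MP camera shoots photos of roughly 4000 x 3000 pixels.
print(dimensions_from_megapixels(12))  # (4000, 3000)
```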

Pixel size (μm)

A light green Google Pixel 7 with the camera bar in focus.

The physical pixels on a camera's image sensor are tiny. In modern smartphones, individual pixels are a fraction of the width of a single human hair. You'll see pixel size noted in μm — a symbol that stands for micrometers. One micrometer is one one-millionth of a meter. The Google Pixel 7's 50-megapixel primary camera sensor, for example, has a pixel size of 1.2 μm.

Pixel size matters because larger pixels collect more light, making a camera better at seeing in dimly lit conditions. Many phone cameras compensate for their tiny pixel size through pixel binning, which combines adjacent pixels into larger, more light-sensitive virtual pixels at the cost of resolution. The Samsung Galaxy S23, for example, has a 50-megapixel primary camera with 1μm pixels. By binning groups of four pixels, it takes 12.5-megapixel photos with effectively 2μm pixels, boosting low-light performance.
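
To make the binning idea concrete, here's a minimal NumPy sketch of 2×2 binning. It's an illustration of the concept only: real sensors combine pixels at readout, in hardware, and the toy data below is made up.

```python
import numpy as np

def bin_2x2(raw):
    """Average each 2x2 block of pixels into one larger virtual pixel.

    Output resolution drops to a quarter (e.g., 50 MP -> 12.5 MP), and
    each output pixel represents 4x the light-collecting area.
    """
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 "sensor readout"; a real raw frame is millions of pixels.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(bin_2x2(raw).shape)  # (2, 2): one quarter the pixel count
```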

Sensor size

A Huawei phone's camera module.

In mobile photography, sensor size matters for the same reason pixel size matters. A larger sensor has a larger surface area that collects more light, improving performance in dim settings. Bigger sensors can also accommodate more individual pixels without sacrificing light sensitivity. All else being equal, a one-inch 50-megapixel sensor performs better in low light than a smaller 1/1.2-inch 50-megapixel sensor.

Sensor sizes are often noted as two numbers. For example, the OnePlus 11's primary camera has a 1/1.56-inch sensor. It looks a little odd, but it's just a fraction: 1 ÷ 1.56 = 0.64. One caveat: the "inch" here is a nominal figure inherited from the days of tube-based video cameras, so the sensor's physical diagonal is actually smaller than the fraction suggests. The notation still works for comparison, though, because a bigger fraction always means a bigger sensor.
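
If you want to decode the notation yourself, the conversion is a one-liner. Keep the legacy-inch caveat above in mind; this gives the nominal figure, not the physical diagonal:

```python
def sensor_type_to_inches(denominator):
    """Convert '1/x-inch' sensor notation to a decimal figure.

    This is the nominal optical-format number; the physical diagonal
    is smaller, but a bigger result still means a bigger sensor.
    """
    return 1 / denominator

# The OnePlus 11's 1/1.56-inch primary sensor vs. a 1-inch-type sensor.
for label, denom in [("OnePlus 11 primary", 1.56), ("1-inch type", 1.0)]:
    print(f"{label}: {sensor_type_to_inches(denom):.2f}-inch nominal")
```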

Aperture (f/stop)

The rear camera array on the OnePlus 10R.

A camera's aperture is the opening that light passes through on its way from the lens's glass to the image sensor. Aperture is noted as a number following an f. For example, the Samsung Galaxy A54's primary camera has an f/1.8 aperture. Aperture affects light sensitivity (a larger opening collects more light, resulting in better low-light performance) and depth of field. The narrower a camera's aperture, the sharper the parts of a photo outside the point of focus appear.

Counterintuitively, the smaller the number, the larger the aperture's opening. All else being equal, a camera with an f/1.8 aperture takes photos that are brighter and have a shallower depth of field than a camera with an f/2.8 aperture.

As with sensor size, the aperture's strange-looking notation is a fraction. The numerator, f, is a variable that stands in for the lens's focal length. Dividing the focal length by the number after the slash gives you the diameter of the aperture's opening. (Focal length isn't discussed much in mobile photography, so we won't dig too deep into it here.)
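
As a rough worked example, here's that fraction spelled out in code. The 6mm focal length is our assumption; real phone lenses vary, but it's in the right ballpark:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """The f-stop fraction spelled out: diameter = f / N."""
    return focal_length_mm / f_number

focal = 6.0  # assumed focal length in mm; varies from phone to phone
d_wide = aperture_diameter_mm(focal, 1.8)    # ~3.33 mm opening
d_narrow = aperture_diameter_mm(focal, 2.8)  # ~2.14 mm opening

# Light gathered scales with the area of the opening (diameter squared),
# so f/1.8 collects roughly 2.4x the light of f/2.8.
print(f"f/1.8: {d_wide:.2f} mm, f/2.8: {d_narrow:.2f} mm")
print(f"light ratio: {(d_wide / d_narrow) ** 2:.1f}x")
```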

Field of view

A smartphone camera's field of view is the area the camera can see at any given time. In mobile photography, field of view is most often mentioned in the context of ultra-wide secondary cameras. For example, the Motorola ThinkPhone has an ultra-wide shooter with a 120° field of view. The number describes an angle: if you drew a line from the camera's sensor to each of two opposing corners of the frame it captures, the angle between those lines at the sensor would be 120°.

The same photo taken on the Pixel 7's 114° ultra-wide vs. the 7 Pro's 125.8°.

A larger field of view number means more stuff is visible in the frame. The Pixel 7 Pro's 125.8° ultra-wide camera can see more at once than the ThinkPhone's 120°. Wider fields of view also mean objects close to the camera can appear distorted.
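
Field of view falls out of simple lens geometry: the angle depends on the sensor's size and the lens's focal length. A sketch, with made-up but plausible numbers:

```python
import math

def field_of_view_deg(sensor_dim_mm, focal_length_mm):
    """Field of view across one sensor dimension:
    FOV = 2 * arctan(dimension / (2 * focal length)).
    Use the sensor diagonal to get the diagonal FOV spec sheets quote.
    """
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Illustrative values: a very short focal length and a sizeable sensor
# dimension yield the ~120-degree figures quoted for ultra-wide cameras.
print(f"{field_of_view_deg(8.0, 2.3):.0f} degrees")  # ~120
```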

Image stabilization (OIS, EIS)

The Sony Xperia 5 IV's camera module.

Image stabilization is what it sounds like. It's a technology that seeks to stabilize the images you take, minimizing the impact of slight camera movement and preventing blur. There are two kinds of image stabilization: optical (OIS) and electronic (EIS).

In mobile photography, optical image stabilization works by mechanically shifting your camera's lens slightly to compensate for your phone's movement. You'll hear a faint clicking near the camera module when you shake a phone with OIS. That's the lens shifting.

Electronic image stabilization works by digitally cropping in on the image, leaving a margin of unused pixels around the frame. (You may notice this crop when you swap from your phone's photo mode to video recording.) When the phone detects it has shifted in one direction, it compensates by sliding the cropped frame in the opposite direction.
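
Here's a toy sketch of that counter-shifting crop. It's deliberately simplified: real EIS fuses gyroscope data, handles rotation and rolling shutter, and warps the frame rather than just sliding a window.

```python
import numpy as np

def eis_crop(frame, detected_shift, margin=64):
    """Crop inside the full frame, sliding the crop window opposite
    to the detected shake so the subject stays put in the output.

    `detected_shift` is a (dy, dx) motion estimate in pixels.
    """
    dy, dx = detected_shift
    h, w = frame.shape[:2]
    # Counter-shift the window, clamped so it stays inside the frame.
    top = int(np.clip(margin - dy, 0, 2 * margin))
    left = int(np.clip(margin - dx, 0, 2 * margin))
    return frame[top:top + h - 2 * margin, left:left + w - 2 * margin]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one video frame
print(eis_crop(frame, detected_shift=(10, -5)).shape)  # (952, 1792, 3)
```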

For more, check out our guide to the different types of image stabilization in mobile photography.

Autofocus (Laser AF, PDAF)

The top of the Samsung XCover6 Pro.

You probably have a good idea of what autofocus is. But different phones use different methods to dial in focus automatically. Standard autofocus works by detecting contrast between adjacent pixels, operating on the principle that in-focus objects naturally exhibit higher contrast than out-of-focus ones. The two types you'll see mentioned most often, though, are laser autofocus and phase detection autofocus, or PDAF.
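
Before getting to those two, here's a toy sketch of the contrast-detection idea: step the lens through positions, score each frame's contrast, and settle where the score peaks. The fake camera and crude sharpness metric below are stand-ins for illustration only.

```python
import numpy as np

def sharpness(image):
    """Crude contrast metric: variance of neighbor-to-neighbor
    differences. In-focus frames score higher."""
    return float(np.var(np.diff(image, axis=0)) + np.var(np.diff(image, axis=1)))

def contrast_autofocus(capture_at):
    """Sweep lens positions, keep the one with the most contrast.
    `capture_at` stands in for 'move the lens, grab a frame'."""
    positions = np.linspace(0.0, 1.0, 11)
    return max(positions, key=lambda pos: sharpness(capture_at(pos)))

# Fake camera: frames get smeared the farther the lens sits from the
# "true" focus position of 0.6.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

def capture_at(pos):
    n = int(abs(pos - 0.6) * 20)  # more defocus -> more smearing
    blurred = scene.copy()
    for shift in range(1, n + 1):
        blurred += np.roll(scene, shift, axis=1)
    return blurred / (n + 1)

print(contrast_autofocus(capture_at))  # settles near 0.6
```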

Laser autofocus works by emitting an invisible beam of light from an autofocus module, which bounces off whatever the camera is pointed at. The phone measures the time it takes to detect the laser's reflection, and with some quick math, works out roughly how far away the subject you're trying to photograph is, dialing in focus to match.
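
The quick math is the classic time-of-flight calculation: distance equals the speed of light times the round-trip time, halved because the pulse travels out and back. For example:

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def subject_distance_m(round_trip_seconds):
    """Time-of-flight ranging: halve the round trip, since the
    laser pulse travels to the subject and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A reflection detected ~3.3 nanoseconds after emission puts the
# subject roughly half a meter from the phone (illustrative timing).
print(f"{subject_distance_m(3.3e-9):.2f} m")  # ~0.49 m
```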

PDAF is more complex. In smartphones, it analyzes the amount of light that falls on each pixel in predetermined pairs that, when the image is in focus, should be exposed to the same amount of light. If light falling on the two pixels in a given pair is uneven, the image isn't in focus over that particular pair. The degree of unevenness lets the phone know how to move its camera lens, forward or back, to achieve proper focus.
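
As a toy illustration of the pairing idea (real PDAF correlates whole strips of paired pixels and maps the offset to a lens position via per-camera calibration; this sketch only checks the imbalance):

```python
import numpy as np

def pdaf_hint(left_pixels, right_pixels, tolerance=0.01):
    """Compare light falling on paired phase-detection pixels.

    Even pairs mean the image is in focus over that region; the sign
    of the imbalance hints at which way to drive the lens. (Which
    sign means which direction is a calibration detail.)
    """
    imbalance = float(np.mean(left_pixels) - np.mean(right_pixels))
    if abs(imbalance) < tolerance:
        return "in focus"
    return "drive lens one way" if imbalance > 0 else "drive lens the other way"

# Made-up readings from a few phase-detection pixel pairs.
left = np.array([0.52, 0.48, 0.50, 0.55])
right = np.array([0.41, 0.39, 0.44, 0.40])
print(pdaf_hint(left, right))  # uneven pairs -> out of focus here
```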

Most phones use a combination of these techniques, ideally resulting in snappy autofocus.

Have you caught the mobile photography bug?

We hope this helped demystify some of the terminology phone manufacturers use to describe their camera technology. If you feel like diving deeper into mobile photography, we have handy guides on the Google Pixel camera app, plus some tips for beginner mobile photographers.