Let’s talk about pixels. Specifically, iPhone 14 pixels. More specifically, iPhone 14 Pro pixels. Because while the headline news is that the latest Pro models offer a 48MP sensor instead of 12MP, that isn’t really the most significant improvement Apple has made to this year’s camera.
In fact, of the big camera changes this year, the 48MP sensor is, for me, the least important. But bear with me here, because a lot needs to be unpacked before I can explain why I think the 48MP sensor matters much less than these three things:
- Sensor size
- Pixel binning
- The Photonic Engine
One 48MP sensor, two 12MP ones
Colloquially, we tend to talk about the iPhone camera in the singular, and then refer to the three different lenses: main, wide, and telephoto. We do this because it’s familiar (that’s how DSLR and mirrorless cameras work: one sensor, many interchangeable lenses), and because it’s the illusion Apple maintains for the sake of simplicity in the Camera app.
The reality, of course, is different. The iPhone actually has three camera modules, each with its own lens and its own sensor. When you tap the 3x button, you’re not just selecting the telephoto lens; you’re switching to a different sensor. When you slide-zoom, the Camera app automatically and invisibly selects the appropriate camera module, then applies the necessary crop.
Only the main camera module has a 48MP sensor; the other two modules still have 12MP ones.
Apple was completely up front about this when it introduced the new models, but it’s an important detail that some may have missed:

For the first time ever, the Pro lineup features a new 48MP main camera with a quad-pixel sensor that adapts to the image being captured, and features second-generation sensor-shift optical image stabilization.
48MP sensor works part-time
Even when you’re using the main camera, with its 48MP sensor, you’re still only shooting 12MP photos by default. Again, Apple:
For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.
You shoot a full 48MP image only when all three of these things are true:
- You’re using the main camera (not the telephoto or wide)
- You’re shooting in ProRAW (which is off by default)
- You’re shooting in good light
So you can shoot at 48MP if you want to. But mostly, you won’t…
Apple’s approach makes sense
You may ask: why give us a 48MP sensor and then mostly not use it?

Apple’s approach makes sense because, in reality, there are very few occasions when shooting at 48MP is better than shooting at 12MP. And since doing so creates huge files that eat up your storage, there’s no point making it the default.
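To get a sense of how quickly those files add up, here’s a back-of-envelope sketch. The 12-bit raw depth is an assumption on my part, and real ProRAW files are compressed DNG containers whose size varies, but the order of magnitude is right:

```python
# Rough storage estimate for a single 48MP raw capture.
# Assumes ~12 bits of raw data per pixel (an assumption, not Apple's spec).
pixels = 48_000_000
bits_per_pixel = 12

raw_megabytes = pixels * bits_per_pixel / 8 / 1_000_000
print(f"~{raw_megabytes:.0f} MB of raw data per 48MP shot")  # ~72 MB
```

Compare that to the few megabytes a typical 12MP HEIC occupies, and the “intense appetite” for storage is obvious.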
I can only think of two scenarios where shooting a 48MP image is genuinely useful:
- You want to print the photo at a larger size
- You need to crop the image a lot
That second reason is also a bit questionable, because if you need to crop that much, you’re usually better off switching to the 3x camera.
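The arithmetic behind that caveat, sketched quickly: cropping to simulate zoom costs resolution with the square of the zoom factor, so a 3x crop from 48MP leaves fewer pixels than the native 12MP telephoto:

```python
# Resolution left after cropping a 48MP frame to simulate zoom.
full_mp = 48

for zoom in (2, 3):
    cropped_mp = full_mp / zoom ** 2
    print(f"{zoom}x crop -> {cropped_mp:.1f}MP")

# A 2x crop still leaves 12.0MP, but a 3x crop is down to ~5.3MP,
# less than half what the dedicated 12MP telephoto delivers natively.
```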
Now let’s talk about sensor size
There are two big differences when comparing any smartphone camera to a DSLR or high-quality mirrorless camera.

One of them is the quality of the lens. Standalone cameras can have much better lenses thanks to both physical size and cost; it’s not uncommon for a professional or keen amateur photographer to spend a four-figure sum on a lens. Smartphone cameras obviously can’t compete with that.

The second is sensor size. All other things being equal, the bigger the sensor, the better the image quality. Smartphones, by the nature of their size and all the other technology they need to fit in, have much smaller sensors than standalone cameras. (They also have limited depth, which imposes another significant limit on sensor size, but we don’t need to get into that.)
A smartphone-sized sensor limits image quality and also makes it hard to achieve shallow depth of field – which is why the iPhone does this artificially with Portrait Mode and Cinematic Video.
Apple’s larger sensor + limited megapixel approach
While there are both obvious and less obvious limits to the size of sensor you can use in a smartphone, Apple has historically used larger sensors than other smartphone brands, which is part of the reason the iPhone was for so long seen as the go-to phone for camera quality. (Samsung later switched to doing the same.)
But there is another reason. If you want the best-quality images from a smartphone, you also want the pixels to be as big as possible.
This is why Apple religiously stuck with 12MP while brands like Samsung squeezed as many as 108MP into sensors of similar size. Cramming too many pixels into a small sensor adds a fair amount of noise, which is especially noticeable in low-light photos.
OK, it took me a while to get here, but now I can say why I think the bigger sensor, pixel-binning, and the Photonic Engine are a far bigger deal than the 48MP resolution…
#1: iPhone 14 Pro/Max sensor is 65% bigger
This year, the main camera sensor in the iPhone 14 Pro/Max is 65% bigger than last year’s model. Obviously it’s still nothing compared to a standalone camera, but for a smartphone camera, it’s (pun intended) huge!
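For a rough sense of scale (my own arithmetic, not an Apple figure): sensor area grows with the square of linear size, so a 65% increase in area works out to about 28% more width and height:

```python
import math

# A 65% increase in sensor area corresponds to sqrt(1.65) ~= 1.28x
# in each linear dimension (area scales as the square of linear size).
area_ratio = 1.65
linear_ratio = math.sqrt(area_ratio)
print(f"{(linear_ratio - 1) * 100:.0f}% wider and taller")  # 28% wider and taller
```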
But, as mentioned above, if Apple had simply squeezed four times as many pixels into a sensor only 65% larger, image quality would actually suffer. That’s why you’ll mostly be shooting 12MP stills, and that’s thanks to this…

#2: Pixel-binning

To shoot 12MP images on the main camera, Apple uses pixel-binning. This means the data from each group of four pixels is combined into one virtual pixel (an averaged value), so the 48MP sensor is mostly being used as a larger 12MP sensor.
This illustration is simple, but it gives the basic idea:
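The averaging step at the heart of pixel-binning can be sketched in a few lines of Python. This is a toy 4x4 “sensor” with made-up values, just to show the mechanics:

```python
def bin_2x2(image):
    """Average each 2x2 block of a list-of-lists image into one value."""
    binned = []
    for r in range(0, len(image), 2):
        row = []
        for c in range(0, len(image[0]), 2):
            quad = (image[r][c] + image[r][c + 1] +
                    image[r + 1][c] + image[r + 1][c + 1])
            row.append(quad / 4)  # one "virtual pixel" per quad
        binned.append(row)
    return binned

# Toy 4x4 sensor readout (illustrative values, not real sensor data).
sensor = [[10, 12, 50, 54],
          [14, 16, 52, 56],
          [90, 92, 20, 24],
          [94, 96, 22, 26]]

print(bin_2x2(sensor))  # [[13.0, 53.0], [93.0, 23.0]]
```

Each virtual pixel collects the light from four physical pixels, which is what pushes up the signal-to-noise ratio in low light.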
What does this mean in practice? Pixel size is measured in microns (millionths of a meter). Most premium Android smartphones have pixels somewhere in the 1.1 to 1.8 micron range. The iPhone 14 Pro/Max, when using the sensor in 12MP mode, effectively has pixels measuring 2.44 microns. That is a really significant improvement.
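Working backwards from that 2.44-micron figure (my inference from the numbers above, not an official spec sheet): binning a 2x2 quad doubles the effective pixel pitch, which implies native pixels of about 1.22 microns:

```python
# Binning 2x2 quads doubles the effective pixel pitch, because each
# virtual pixel covers a 2x2 area of native pixels.
binned_pitch_um = 2.44          # effective 12MP-mode pitch quoted above
native_pitch_um = binned_pitch_um / 2

print(f"native pitch: {native_pitch_um:.2f} microns")  # native pitch: 1.22 microns
```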
Without pixel-binning, the 48MP sensor would – most of the time – be a downgrade.
#3: Photonic Engine
We’ve established that smartphone cameras can’t compete with standalone cameras in terms of optics and physics, but where they can compete is in computational photography.

Computational photography has been used in SLRs for literally decades. When you switch metering modes, for example, you’re instructing the computer inside your DSLR to interpret the raw data from the sensor in a different way. Similarly, on consumer DSLRs and all mirrorless cameras, you can select from a variety of photo modes, which again tell the microprocessor how to adjust the data from the sensor to achieve the desired result.

So computational photography plays a much bigger role in standalone cameras than many people realize. And Apple is very, very good at computational photography. (Okay, it’s not that good at cinematic video yet, but give it a few years…)
The Photonic Engine is a dedicated chip that powers Apple’s Deep Fusion approach to computational photography, and I’m already seeing a huge difference in the dynamic range of photos. (Examples to follow in next week’s iPhone 14 diary.) And not just in the limits, but in smart decisions: which shadows to bring out, and which highlights to tame.
The result is significantly better photos, which have as much to do with the software as with the hardware.
The dramatically larger sensor (in the context of smartphones) is a really big deal when it comes to image quality.
Pixel-binning means Apple has effectively built a larger 12MP sensor for most photos, allowing the benefits of a larger sensor to be felt.
The Photonic Engine means a dedicated chip for image processing, and I’m already seeing the real-life benefits of this.
More to follow in the iPhone 14 diary piece, when I put the camera to a more extensive test over the next few days.