Mobile Photography: Why iPhones and Pixel phones give you pictures of amazing quality
Google and Apple have taken their race to improve camera quality beyond lenses and sensors. It is now in the realm of computational photography.
When Google launched the Pixel phones on October 20, 2016, the main attraction was the groundbreaking quality of their single-lens camera, which was good enough to compete with Apple.
Since the launch, the tech world has been comparing Pixel phones with Apple's iPhones. Both lines, coming from the world's biggest makers of hardware and software, now lead the market.
Both companies launch their new models only once a year, unlike Samsung, Huawei, and Xiaomi, which introduce new phones and features almost every month.
While the iPhones and Pixel phones are comparable in resolution, processor speed, and lens quality, in-depth comparisons have shown that the Google Pixel performed better for nighttime pictures.
But with the launch of the iPhone 11, things have changed.
Apple released the iPhone 11 and 11 Pro Max after learning from the techniques Google had used to capture images in low light. It no longer wants to lag behind.
Famous tech blogger Frick shares his experience of taking photos in the dark with the iPhone 11 Pro Max. Watch the video.
But things took another turn when Google launched the Pixel 4 in October, claiming that its camera could be used even for astrophotography, not just for taking photos in low light.
Meanwhile, Xiaomi, Samsung, and Huawei are also in the race, with reports that their cameras outperform rivals in several performance tests.
When Apple released the iPhone 11 Pro Max in September, reviewers called it the best camera phone on the market. But that may no longer be the case after the release of the Google Pixel 4. Listen to what Google says about astrophotography.
Google relies on world-renowned photographers and agencies to evaluate the image quality of the Pixel cameras. Computational photography is what helps Google improve its photo quality to such a degree.
What is Computational Photography?
Until now, the biggest obstacles to quality images have been lenses and sensors. The tiny sensors in smartphones cannot detect subtle variations between light and shadow.
Computational photography tries to overcome this: software compensates for the limitations of the hardware.
Traditionally, a combination of four things creates a good photograph: the subject, the lens, the light, and the camera body. But Google is bringing in a change, and the change is that software replaces the camera body. Computation, not the hardware, determines the quality of the photo.
Smart HDR and HDR Plus
One of the main computational photography features of the Google Pixel 4 is Live HDR+, which can beautifully compose scenes with extreme contrasts of light. In addition, the phone uses machine learning for image processing, and Google claims that its dual exposure controls let you adjust shadows and highlights in the image independently.
The idea is to take multiple pictures within a second and merge the best of them. The phones use HDR+ technology for this. They also create hybrid images by combining digital zoom with optical zoom. Google's research engineers claim that white balancing done with machine learning improves the natural look of the images, and that merging two or more images taken at different brightness levels gives better detail and dynamic range.
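To make the burst-merge idea concrete, here is a minimal sketch in Python, assuming a static scene and perfectly aligned frames. It is not Google's HDR+ pipeline, which aligns tiles and rejects motion before merging; it only shows why averaging several noisy frames yields a cleaner image. All function names and numbers are illustrative.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of same-exposure frames to suppress sensor noise.

    A real HDR+-style pipeline aligns frames tile by tile and rejects
    moving content; this sketch assumes a static, perfectly aligned burst.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: one "true" scene plus independent per-frame noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 200, size=(480, 640))
burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(8)]

merged = merge_burst(burst)
print("noise in one frame:", np.std(burst[0] - scene))   # ~20
print("noise after merge :", np.std(merged - scene))     # ~20 / sqrt(8), about 7
```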
There are some drawbacks to this approach, especially when the subject is moving. But things changed when Apple introduced HDR on the iPhone 4 in 2010, combining faster electronics with better algorithms, and other phones have increased the capability of their HDR processing every year since. Today the technology is the default option on virtually every smartphone camera.
Now, Google has taken the technology to another level. Instead of stacking frames captured at under-exposed, normal, and over-exposed settings, the technology stitches together a series of deliberately under-exposed frames. This gives pictures with the best detail.
It also lets the exposure be set accurately without letting too much light into the bright side of the frame. In other words, the sky will look blue rather than white, as it would in an over-exposed shot. The images come out roughly twice as sharp, with very little noise.
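A rough numeric illustration of why this works, with made-up values: a single "correct" exposure clips the bright sky at the sensor's ceiling, while frames shot at a quarter of that exposure never clip, so brightening their merge afterwards keeps both the blue sky and the shadow detail.

```python
import numpy as np

CEILING = 255.0                        # sensor saturation level (illustrative)
sky, shadow = 900.0, 40.0              # "true" scene brightness, arbitrary units

# One normal exposure: the sky clips at the ceiling and its detail is lost.
normal = np.clip(np.array([sky, shadow]), 0, CEILING)          # -> [255., 40.]

# Four frames at 1/4 exposure: nothing clips, the sky/shadow ratio survives.
short = [np.clip(np.array([sky, shadow]) * 0.25, 0, CEILING) for _ in range(4)]
merged = np.mean(short, axis=0) * 4.0    # average, then apply digital gain

print("single exposure:", normal)   # sky stuck at 255
print("merged frames  :", merged)   # [900., 40.] before final tone mapping
```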
To counter this, Apple offers Smart HDR as its alternative, introduced with the 2018 iPhone XS generation.
Computational technology is also being used in 3D photography. Apple uses two camera lenses to create a stereo effect, but in the Pixel 3, Google uses a single main camera lens, image-sensor tricks, and artificial intelligence to estimate the depth of the image and how far apart the elements in it are.
Another is portrait-mode photography, which lets you blur the background behind people or other objects in the picture. This technique, known as bokeh, was not available in early mobile photography; it was considered a look achievable only with expensive DSLR cameras and lenses. Now mathematical algorithms make it possible without large and expensive glass.
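As a rough sketch of the idea, not Google's or Apple's actual portrait pipeline: estimate a depth map, blur the whole image, and composite the sharp subject back over the blurred background. The depth map here is simply assumed to exist; in real phones it comes from a second lens, dual-pixel sensor data, or a neural network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, depth, subject_depth, tolerance=0.1, blur_sigma=8):
    """Keep pixels near the subject's depth sharp; blur everything else.

    image : HxWx3 float array, depth : HxW float array (0 = near, 1 = far).
    Purely illustrative: real portrait modes feather the mask and vary
    blur strength with distance.
    """
    blurred = np.stack([gaussian_filter(image[..., c], blur_sigma)
                        for c in range(3)], axis=-1)
    subject_mask = (np.abs(depth - subject_depth) < tolerance)[..., None]
    return np.where(subject_mask, image, blurred)

# Toy scene: a bright square "subject" in front of a distant background.
img = np.full((200, 200, 3), 0.2)
img[60:140, 60:140] = 0.9
depth = np.ones((200, 200))
depth[60:140, 60:140] = 0.2

portrait = fake_bokeh(img, depth, subject_depth=0.2)
```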
Pixel Night Sight
Night Sight, used in the Google Pixel 3, is another computational photography feature. On this feature, and on its ability to eliminate noise in pictures while maintaining colour tones, the Pixel 3 has even been compared with the expensive Canon 5D Mark IV SLR camera and its large lens.
One of the great strengths of these models is that they can function in low-light places such as bars and restaurants, where lighting had previously been a big challenge for mobile photography. It also opens up night street photography.
Now, images of roads and buildings bathed in neon light or night rain can be captured with these techniques while retaining their natural colours. Along with this, the Pixel 4's step up from night photography to astrophotography has astounded Apple. In astrophotography mode, the Pixel combines 16 exposures of 15 seconds each to stitch together a photo of the night sky and stars.
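A back-of-the-envelope illustration of why stacking short exposures works: noise averages out roughly as the square root of the number of frames, so 16 frames of 15 seconds gather four minutes of light and cut the noise about four-fold, while each individual exposure stays short enough that the stars do not trail. The simulation below only demonstrates that square-root relationship; the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 10.0                     # faint star field (arbitrary units)
frame_noise = 5.0                      # per-frame sensor noise

frames = true_signal + rng.normal(0, frame_noise, size=(16, 100_000))
stacked = frames.mean(axis=0)

print("noise in one frame  :", frames[0].std())   # about 5
print("noise in 16x stack  :", stacked.std())     # about 5 / sqrt(16) = 1.25
print("total light gathered:", 16 * 15, "seconds")
```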
iPhone 11 Pro Max Deep Fusion
One of the biggest highlights of this year's iPhone launch is the Deep Fusion feature. It is a complex feature for capturing images across the dark and bright areas of a scene. Before the shutter is pressed, the phone has already captured four short-exposure shots and four standard shots; when the shutter is pressed, it takes one long-exposure shot.
The advantage of Deep Fusion is that it analyses all nine shots within seconds and gives you the photo with the best detail. While the iPhone 11 was introduced in September, Deep Fusion only arrived with the iOS 13.2 update.
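Apple has not published how Deep Fusion works internally, so the sketch below is only a loose illustration of one plausible ingredient: merging the frames pixel by pixel with weights that favour whichever frame is locally sharpest. The sharpness measure and function names are my own assumptions, not Apple's.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def detail_weighted_merge(frames):
    """Merge grayscale frames, weighting each pixel towards the sharpest frame.

    Illustrative stand-in for the per-pixel, detail-aware selection that
    burst pipelines such as Deep Fusion are described as performing.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    # Smoothed absolute Laplacian as a crude local-sharpness proxy.
    sharpness = np.stack([uniform_filter(np.abs(laplace(f)), size=5)
                          for f in stack]) + 1e-6
    weights = sharpness / sharpness.sum(axis=0)
    return (weights * stack).sum(axis=0)

# Usage sketch: nine frames from a burst (random stand-ins here).
rng = np.random.default_rng(1)
burst = [rng.uniform(0, 1, (256, 256)) for _ in range(9)]
fused = detail_weighted_merge(burst)
```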
Watch the Deep Fusion test conducted by leading tech vloggers Marques Brownlee and iJustine.
Earlier, the single-lens cameras of the Google Pixel could not compete with the zoom feature of the iPhones, which carry a dedicated lens with a longer focal length just for zoomed images. But Google's Pixel phones use the tricks of computational photography to counter this.
Image sensors capture colour as red, green, and blue. In normal photography, each pixel records only one of those colours, and the other two are estimated from neighbouring pixels. But in Google's Pixel phones, tiny shifts between the frames of a burst mean that red, green, and blue values can all be measured at each pixel position. The company says this is what allows its digital zoom to capture images better than other digital zooms.
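A much-simplified sketch of the multi-frame trick: each burst frame lands on the sensor shifted by a tiny, known amount (on the Pixel the shift comes from natural hand shake), and the samples are placed onto a finer grid so that every output position gets a real measurement rather than an interpolated one. The shifts below are clean integer steps on a 2x grid purely to keep the example short; they are not how the real pipeline represents motion.

```python
import numpy as np

def accumulate_superres(frames, shifts, scale=2):
    """Place each frame's pixels onto a finer grid at its known shift.

    frames : list of HxW arrays, shifts : list of (dy, dx) fine-grid offsets.
    Naive illustration of multi-frame super-resolution, not Google's method.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::scale, dx::scale] += frame
        count[dy::scale, dx::scale] += 1
    return acc / np.maximum(count, 1)

# Four frames, each shifted by one fine-grid step, cover every position
# of the 2x grid exactly once.
rng = np.random.default_rng(7)
low_res = [rng.uniform(0, 1, (120, 160)) for _ in range(4)]
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
high_res = accumulate_superres(low_res, shifts)   # shape (240, 320)
```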
Small mobile cameras are now doing things that were once done only by DSLR cameras. It is remarkable that such cameras can now deal with low light and other challenging situations.
There are signs of far better technologies making their way into phones, one example being the growing use of artificial intelligence in mobile photography. In the coming years, the possibility of controlling the mobile camera through voice assistants cannot be ruled out. Let's wait and see.
(Credit for information and pictures: Apple, Google, TechCrunch, CNET, The Verge, iJustine, SuperSaf)