21 October 2019
If there is one area where Google excels year after year, it has to be the camera. Backing its camera hardware with computational photography has served Google well. Thanks to the Pixel Neural Core, Google is unlocking features that produce results better than its rivals'.
In a market where every manufacturer has switched to multiple cameras – yes, even Apple is on that list – Google is showing off its skills with just two: a 12MP Dual Pixel sensor with an f/1.7 aperture and a 16MP telephoto sensor with an f/2.4 aperture and up to 2x optical zoom. This is possible thanks to the computational processing and optimization that Google has been leveraging.
One of these features is Frequent Faces, which aims to improve photos of the people you shoot most often. You can select people, and the phone will ensure those subjects get the best focus whenever you're shooting photos containing multiple people. The camera app stores face data for people you frequently photograph on the phone, so that in the future the app knows who to focus on automatically.
The feature first came to light before the Pixel 4 launch, thanks to some digging by the folks over at XDA Developers. Now it is being rolled out as more details about the Pixel 4 phones emerge.
It remains to be seen how well Frequent Faces will perform and whether it actually helps people get better group shots. For now, the feature sounds promising: the phone will try to ensure that your subject is always smiling, not blinking, and in focus. Let's wait for feedback from users who have ordered the Pixel 4.