
The best phone camera that you can buy in the US just got better, with some new features coming to the Pixel 3’s Photobooth mode. Now, it has a “kiss cam” that can detect you puckering up and snap a photo at just the right moment.
The real star of the show for the Pixel camera is the AI that puts together the final images. The sensor itself is the same standard 12-MP one that you can find in any number of handsets. Now the Photobooth mode on the Pixel 3 and 3 XL has a new trick – a shutter-free mode that uses machine learning to take selfies automatically. It does this by detecting five key facial expressions: “smiles, tongue-out, kissy/duckface, puffy-cheeks, and surprise.”
That should help you grab the best possible selfies without resorting to things like Bluetooth triggers, or worrying about whether everyone in a group is actually looking at the camera.
The computer vision powering the shutter-less snaps is pretty darn impressive. It’s an upgraded version of the system put together for the ill-fated Google Clips camera. That model was trained specifically for kissing, so Google’s engineers added the ability to recognize other facial expressions, based on a list that real-world photographers supplied.
Even more impressive is that all the number-crunching is done on-device, with nothing sent back to Google’s servers.
The AI first runs a filter that rejects frames where anyone in the scene has their eyes closed, is talking, is moving enough to blur the shot, or isn't making one of the facial expressions it has been trained to detect. If the frame passes that first test, the system scores the scene, and those scores help the app decide whether or not to take the shot.
As you can see in the GIF below, the app shows a moving white line at the top of the screen. It's a visual indicator of the overall quality of the scene; once the scene reaches stage 4, the app takes the selfie. Those stages are:
(1) no faces clearly seen, (2) faces seen but not paying attention to the camera, (3) faces paying attention but not making key expressions, and (4) faces paying attention and making key expressions.
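The two-step pipeline described above — a hard filter followed by staged scene scoring — can be sketched roughly like this. To be clear, Google hasn't published its implementation; every name, field, and rule here is a hypothetical illustration of the logic, not the actual code:

```python
from dataclasses import dataclass
from typing import List, Optional

# The five key expressions the article lists.
KEY_EXPRESSIONS = {"smile", "tongue-out", "kissy-face", "puffy-cheeks", "surprise"}

@dataclass
class Face:
    eyes_open: bool
    talking: bool
    motion_blur: bool
    attending: bool            # looking at the camera
    expression: Optional[str]  # one of KEY_EXPRESSIONS, or None

def passes_filter(faces: List[Face]) -> bool:
    """Step 1: reject frames with closed eyes, talking, or motion blur."""
    return all(f.eyes_open and not f.talking and not f.motion_blur for f in faces)

def scene_stage(faces: List[Face]) -> int:
    """Step 2: score the scene into the four stages described above."""
    if not faces:
        return 1  # no faces clearly seen
    if not all(f.attending for f in faces):
        return 2  # faces seen but not paying attention
    if not all(f.expression in KEY_EXPRESSIONS for f in faces):
        return 3  # paying attention but no key expressions
    return 4      # paying attention with key expressions

def should_capture(faces: List[Face]) -> bool:
    """Fire the shutter only when the filter passes and the scene hits stage 4."""
    return passes_filter(faces) and scene_stage(faces) == 4
```

For example, a single subject with open eyes, looking at the camera, and smiling would reach stage 4 and trigger a capture, while the same subject glancing away would stall at stage 2.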
I’m not so sure I like the idea of AI silently judging our selfie skills.
It’s only a few short steps from identifying an objectively “good” duck-face to deciding that the human race doesn’t deserve to exist. We’ve all seen Terminator, right? Is that the future we want?
What do you think about the new feature? Is it something you think you’ll get some use from? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.
