

Google’s Pixel 3 gets a kiss cam mode for its camera app

The wonders of AI… I guess.

[Image: Google Pixel 3 camera — TechRadar]

The best phone camera that you can buy in the US just got better, with some new features coming to the Pixel 3’s Photobooth mode. Now, it has a “kiss cam” that can detect you puckering up and snap a photo at just the right moment.

The real star of the show for the Pixel camera is the AI that puts together the final images. The sensor itself is the same standard 12-MP one that you can find in any number of handsets. Now the Photobooth mode on the Pixel 3 and 3 XL has a new trick – a shutter-free mode that uses machine learning to take selfies automatically. It does this by detecting five key facial expressions: “smiles, tongue-out, kissy/duckface, puffy-cheeks, and surprise.”

That should help with grabbing the best possible selfies, without resorting to things like Bluetooth triggers or having to make sure that everyone in a group is actually looking at the camera.

The computer vision powering the shutter-less snaps is pretty darn impressive. It’s an upgraded version of the system put together for the ill-fated Google Clips camera. That model was trained specifically for kissing, so Google’s engineers added the ability to recognize other facial expressions, based on a list that real-world photographers supplied.

Even more impressive is that all the number-crunching is done on-device, with nothing sent back to Google’s servers.


The AI first runs a filter that rejects any frame in which someone has their eyes closed, is talking, or is moving enough to blur the shot, or in which it can't detect any of the facial expressions it's been trained on. If a frame passes that first test, the app scores the scene to decide whether or not to take the shot.
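The two-step gating described above can be sketched in a few lines of Python. Everything here is illustrative: the `Face` fields, the expression labels, and the blur threshold are all assumptions made for the sake of the example, not Google's actual implementation.

```python
# Hypothetical sketch of Photobooth's first-stage filter: reject frames
# outright before any scoring happens. All names and thresholds invented.

from dataclasses import dataclass
from typing import Optional

# The five key expressions the article says the model detects.
KEY_EXPRESSIONS = {"smile", "tongue-out", "kissy/duckface", "puffy-cheeks", "surprise"}

@dataclass
class Face:
    eyes_open: bool
    talking: bool
    motion_blur: float          # 0.0 (perfectly still) .. 1.0 (very blurred)
    expression: Optional[str]   # one of KEY_EXPRESSIONS, or None if unrecognized

def passes_filter(faces: list, blur_limit: float = 0.3) -> bool:
    """Reject the frame if anyone has closed eyes, is talking, is moving
    enough to blur, or shows no recognizable key expression."""
    if not faces:
        return False
    for face in faces:
        if not face.eyes_open or face.talking or face.motion_blur > blur_limit:
            return False
        if face.expression not in KEY_EXPRESSIONS:
            return False
    return True
```

Only frames that survive this hard filter would go on to the scene-scoring step.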

As you can see in the GIF below, the app shows a moving white line at the top of the screen. This is a visual indicator of the overall quality of the scene; once it reaches stage 4, the app takes the selfie. Those stages are:

1. No faces clearly seen.
2. Faces seen, but not paying attention to the camera.
3. Faces paying attention, but not making key expressions.
4. Faces paying attention and making key expressions.
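The four stages above amount to a simple decision ladder, which can be sketched as follows. The function names and boolean inputs are hypothetical simplifications; the real system presumably works on continuous scores rather than booleans.

```python
# Illustrative mapping of the four quality stages to a shutter decision.
# Naming and logic are a sketch, not Google's actual implementation.

def scene_stage(faces_visible: bool, attention: bool, key_expression: bool) -> int:
    """Return the scene-quality stage (1-4) for the current frame."""
    if not faces_visible:
        return 1  # no faces clearly seen
    if not attention:
        return 2  # faces seen but not paying attention to the camera
    if not key_expression:
        return 3  # paying attention but not making key expressions
    return 4      # paying attention with key expressions

def should_capture(faces_visible: bool, attention: bool, key_expression: bool) -> bool:
    """Fire the shutter only when the scene reaches stage 4."""
    return scene_stage(faces_visible, attention, key_expression) == 4
```

In this framing, the moving white line is just a visualization of whichever stage the current frame lands in.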


I’m not so sure I like the idea of AI silently judging our selfie skills

It’s only a few short steps from identifying an objectively “good” duck-face to deciding that the human race doesn’t deserve to exist. We’ve all seen Terminator, right? Is that the future we want?

What do you think about the new feature? Is it something you think you’ll get some use from? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.


Maker, meme-r, and unabashed geek with nearly half a decade of blogging experience. If it runs on electricity (or even if it doesn't), Joe probably has one around his office somewhere. His hobbies include photography, animation, and hoarding Reddit gold.
