Google Pixel 3 phone aims to automate more daily tasks

Google wants to help you manage daily life, from screening unwanted phone calls to predicting what you'll type.

Update: 2018-10-16 08:11 GMT
The phone on the left shows traffic information as part of the assistant features on the Pixel Stand; the phone on the right shows the option to screen incoming calls. (Photo credit: AP)

There’s not much about the physical details of Google’s new Pixel 3 phone that you can’t find elsewhere. That bigger display and curved design? Apple and Samsung phones already have that. But the Pixel doesn’t intend to wow people with its hardware anyway. It’s really a showcase for Google’s latest advances in software, particularly in artificial intelligence.

Google wants to help you manage daily life, from screening unwanted phone calls to predicting what you’ll type. The software underscores how Google is tapping its strengths in personalization, and perhaps making money through ads in the process.

You get free services in exchange for letting Google deeper into your life. The Pixel isn’t likely to work for anyone uncomfortable with that trade-off. As impressive as Google’s ambitions are, though, AI is still new at the job of saving us from meaningless tasks. That may not come until an eventual Pixel 9 or Pixel 13. The Pixel 3, out Thursday, starting at about $800, is for those who can’t wait.

CALL SCREENING

No doubt you’ve received an automated call from a telemarketer pitching lower interest rates or vacation shares. Google now lets you fight back with an automated response.

When a mystery call comes in, just hit “Screen call.” Google’s voice assistant takes over and asks for a name and purpose of the call. Transcribed responses appear in real time, so you can decide whether to pick up. You can even request more information by tapping buttons such as “Tell me more.”

It’s a good concept, though it’s not clear that it really saves time. You still need to follow the voice assistant’s chatter; taking the call and hanging up would often be faster. Perhaps Google’s assistant could one day handle all that for you without even ringing the phone, then decide based on the response whether to interrupt your game of “Fortnite.”

But legitimate callers would still find this annoying. It didn’t help that I kept tapping “I can’t understand,” forcing friends to repeat themselves over and over to a robot.

TEXT RECOGNITION

Point the camera at a business card, flyer or other printed text, and Google will try to extract phone numbers and addresses. It also works with QR codes and barcodes.

If this sounds familiar, it’s because it is. Last year’s Pixel phones had this Google Lens feature, while Samsung has a similar feature called Bixby Vision. The difference: Before, you had to tap something to activate a feature. Now it’s automatic. The feature is most useful if you have a business card or flyer handy when you’re ready to make a call, visit a website or get directions through Google Maps.

You still need a few extra taps to add the information to your contact list. Then there’s the task of tagging which number is for home, work or cell, and noting the context in which you met that person. That much management might incline you to let those piles of business cards keep stacking up.

A SMART STAND-IN

Place the phone on an optional $79 Pixel Stand charging station, and it can display a rotating set of images from an album you choose. Or you can trust Google to select the best shots. Soon, you’ll be able to let Google’s facial-recognition technology simply pick out photos of your family, including new shots as you take them (though not in the European Union, where privacy regulations are tighter).

The photo display will take you down memory lane, as images from past trips, weddings and family events come up. If only Google were smart enough to remove the party shots never meant for public, sober viewing.

The stand works well as a bedside companion. Before bed, it offers to set your alarm. In the morning, one tap gets you the weather, upcoming calendar events and details about traffic on your commute. It’s not perfect, though: Google still hasn’t figured out that I don’t own a car and have no use for driving directions. Or that I don’t need commute information on weekends.

SMART SNAPPING

Just smile or make a funny face and the selfie camera automatically takes the shot. For regular photos, Google captures extra shots as alternatives in case someone blinks or blocks the view, though Google’s recommendations aren’t always spot on; even humans can disagree on what looks best.

The camera also uses software to combine multiple versions of images, essentially filling in some gaps so that zoomed-in shots come out sharper than they normally would. It starts to cross the line of digital manipulation, but pictures do look nice. It’s still no replacement for a real zoom, which the latest Apple and Samsung phones offer via a second lens. Google turns to software to make up for what it lacks in hardware.

A NOTE ON PRIVACY

Google executives emphasize that much of the AI analysis takes place on the phone, not Google’s servers. When screening calls, for instance, all interactions stay private unless you report the number as spam, in which case it gets added to Google’s database as a warning to others. The basic text recognition for the Google Lens feature and the camera’s image processing are also done on the device. But more advanced Google Lens features, such as recognizing museum paintings, require sending data to Google’s servers. So do most of the requests you make on the Pixel Stand.
